00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 1041 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3708 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.027 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.028 The recommended git tool is: git 00:00:00.028 using credential 00000000-0000-0000-0000-000000000002 00:00:00.029 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.043 Fetching changes from the remote Git repository 00:00:00.050 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.070 Using shallow fetch with depth 1 00:00:00.070 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.070 > git --version # timeout=10 00:00:00.098 > git --version # 'git version 2.39.2' 00:00:00.098 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.143 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.143 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.504 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.516 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.531 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:03.531 > git config core.sparsecheckout # timeout=10 00:00:03.544 > git read-tree -mu HEAD # timeout=10 00:00:03.560 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.581 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.582 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:03.674 [Pipeline] Start of Pipeline 00:00:03.687 [Pipeline] library 00:00:03.689 Loading library shm_lib@master 00:00:03.689 Library shm_lib@master is cached. Copying from home. 00:00:03.704 [Pipeline] node 00:00:03.717 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:03.719 [Pipeline] { 00:00:03.730 [Pipeline] catchError 00:00:03.732 [Pipeline] { 00:00:03.744 [Pipeline] wrap 00:00:03.753 [Pipeline] { 00:00:03.760 [Pipeline] stage 00:00:03.762 [Pipeline] { (Prologue) 00:00:03.778 [Pipeline] echo 00:00:03.780 Node: VM-host-SM9 00:00:03.788 [Pipeline] cleanWs 00:00:03.799 [WS-CLEANUP] Deleting project workspace... 00:00:03.799 [WS-CLEANUP] Deferred wipeout is used... 
00:00:03.805 [WS-CLEANUP] done 00:00:04.078 [Pipeline] setCustomBuildProperty 00:00:04.181 [Pipeline] httpRequest 00:00:04.571 [Pipeline] echo 00:00:04.576 Sorcerer 10.211.164.20 is alive 00:00:04.583 [Pipeline] retry 00:00:04.584 [Pipeline] { 00:00:04.592 [Pipeline] httpRequest 00:00:04.596 HttpMethod: GET 00:00:04.597 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.597 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.598 Response Code: HTTP/1.1 200 OK 00:00:04.599 Success: Status code 200 is in the accepted range: 200,404 00:00:04.599 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.465 [Pipeline] } 00:00:05.481 [Pipeline] // retry 00:00:05.487 [Pipeline] sh 00:00:05.766 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.779 [Pipeline] httpRequest 00:00:06.407 [Pipeline] echo 00:00:06.408 Sorcerer 10.211.164.20 is alive 00:00:06.414 [Pipeline] retry 00:00:06.415 [Pipeline] { 00:00:06.424 [Pipeline] httpRequest 00:00:06.428 HttpMethod: GET 00:00:06.428 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:06.429 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:06.443 Response Code: HTTP/1.1 200 OK 00:00:06.444 Success: Status code 200 is in the accepted range: 200,404 00:00:06.444 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:22.818 [Pipeline] } 00:01:22.835 [Pipeline] // retry 00:01:22.842 [Pipeline] sh 00:01:23.121 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:25.659 [Pipeline] sh 00:01:25.939 + git -C spdk log --oneline -n5 00:01:25.939 c13c99a5e test: Various fixes for Fedora40 00:01:25.939 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:25.939 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:25.939 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:25.939 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:25.957 [Pipeline] withCredentials 00:01:25.966 > git --version # timeout=10 00:01:25.976 > git --version # 'git version 2.39.2' 00:01:25.990 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:25.992 [Pipeline] { 00:01:25.999 [Pipeline] retry 00:01:26.001 [Pipeline] { 00:01:26.013 [Pipeline] sh 00:01:26.291 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:26.561 [Pipeline] } 00:01:26.578 [Pipeline] // retry 00:01:26.582 [Pipeline] } 00:01:26.597 [Pipeline] // withCredentials 00:01:26.607 [Pipeline] httpRequest 00:01:26.989 [Pipeline] echo 00:01:26.990 Sorcerer 10.211.164.20 is alive 00:01:26.999 [Pipeline] retry 00:01:27.001 [Pipeline] { 00:01:27.015 [Pipeline] httpRequest 00:01:27.019 HttpMethod: GET 00:01:27.020 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:27.020 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:27.026 Response Code: HTTP/1.1 200 OK 00:01:27.026 Success: Status code 200 is in the accepted range: 200,404 00:01:27.027 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:55.085 [Pipeline] } 00:01:55.102 
[Pipeline] // retry 00:01:55.110 [Pipeline] sh 00:01:55.389 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:56.773 [Pipeline] sh 00:01:57.102 + git -C dpdk log --oneline -n5 00:01:57.102 eeb0605f11 version: 23.11.0 00:01:57.102 238778122a doc: update release notes for 23.11 00:01:57.102 46aa6b3cfc doc: fix description of RSS features 00:01:57.102 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:57.102 7e421ae345 devtools: support skipping forbid rule check 00:01:57.117 [Pipeline] writeFile 00:01:57.130 [Pipeline] sh 00:01:57.409 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:57.420 [Pipeline] sh 00:01:57.697 + cat autorun-spdk.conf 00:01:57.697 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.697 SPDK_TEST_NVMF=1 00:01:57.697 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:57.697 SPDK_TEST_URING=1 00:01:57.697 SPDK_TEST_USDT=1 00:01:57.697 SPDK_RUN_UBSAN=1 00:01:57.698 NET_TYPE=virt 00:01:57.698 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:57.698 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:57.698 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:57.703 RUN_NIGHTLY=1 00:01:57.705 [Pipeline] } 00:01:57.718 [Pipeline] // stage 00:01:57.732 [Pipeline] stage 00:01:57.734 [Pipeline] { (Run VM) 00:01:57.746 [Pipeline] sh 00:01:58.025 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:58.025 + echo 'Start stage prepare_nvme.sh' 00:01:58.025 Start stage prepare_nvme.sh 00:01:58.025 + [[ -n 0 ]] 00:01:58.025 + disk_prefix=ex0 00:01:58.025 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:58.025 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:58.025 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:58.025 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:58.025 ++ SPDK_TEST_NVMF=1 00:01:58.025 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:58.025 ++ SPDK_TEST_URING=1 00:01:58.025 ++ SPDK_TEST_USDT=1 00:01:58.025 ++ SPDK_RUN_UBSAN=1 00:01:58.025 ++ NET_TYPE=virt 00:01:58.025 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:58.025 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:58.025 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:58.025 ++ RUN_NIGHTLY=1 00:01:58.025 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:58.025 + nvme_files=() 00:01:58.025 + declare -A nvme_files 00:01:58.025 + backend_dir=/var/lib/libvirt/images/backends 00:01:58.025 + nvme_files['nvme.img']=5G 00:01:58.025 + nvme_files['nvme-cmb.img']=5G 00:01:58.025 + nvme_files['nvme-multi0.img']=4G 00:01:58.025 + nvme_files['nvme-multi1.img']=4G 00:01:58.025 + nvme_files['nvme-multi2.img']=4G 00:01:58.025 + nvme_files['nvme-openstack.img']=8G 00:01:58.025 + nvme_files['nvme-zns.img']=5G 00:01:58.025 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:58.025 + (( SPDK_TEST_FTL == 1 )) 00:01:58.025 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:58.025 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:58.025 + for nvme in "${!nvme_files[@]}" 00:01:58.025 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:58.025 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:58.025 + for nvme in "${!nvme_files[@]}" 00:01:58.025 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:58.025 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:58.025 + for nvme in "${!nvme_files[@]}" 00:01:58.025 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:58.025 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:58.025 + for nvme in "${!nvme_files[@]}" 00:01:58.025 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:58.025 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:58.025 + for nvme in "${!nvme_files[@]}" 00:01:58.025 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:58.025 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:58.284 + for nvme in "${!nvme_files[@]}" 00:01:58.284 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:58.284 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:58.284 + for nvme in "${!nvme_files[@]}" 00:01:58.284 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:58.542 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:58.542 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:58.542 + echo 'End stage prepare_nvme.sh' 00:01:58.542 End stage prepare_nvme.sh 00:01:58.554 [Pipeline] sh 00:01:58.832 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:58.832 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:01:59.091 00:01:59.091 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:59.091 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:59.091 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:59.091 HELP=0 00:01:59.091 DRY_RUN=0 00:01:59.091 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:01:59.091 NVME_DISKS_TYPE=nvme,nvme, 00:01:59.091 NVME_AUTO_CREATE=0 00:01:59.091 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:01:59.091 NVME_CMB=,, 00:01:59.091 NVME_PMR=,, 00:01:59.091 NVME_ZNS=,, 00:01:59.091 NVME_MS=,, 00:01:59.091 NVME_FDP=,, 
00:01:59.091 SPDK_VAGRANT_DISTRO=fedora39 00:01:59.091 SPDK_VAGRANT_VMCPU=10 00:01:59.091 SPDK_VAGRANT_VMRAM=12288 00:01:59.091 SPDK_VAGRANT_PROVIDER=libvirt 00:01:59.091 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:59.091 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:59.091 SPDK_OPENSTACK_NETWORK=0 00:01:59.091 VAGRANT_PACKAGE_BOX=0 00:01:59.091 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:59.091 FORCE_DISTRO=true 00:01:59.091 VAGRANT_BOX_VERSION= 00:01:59.091 EXTRA_VAGRANTFILES= 00:01:59.091 NIC_MODEL=e1000 00:01:59.091 00:01:59.091 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:59.091 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:01.626 Bringing machine 'default' up with 'libvirt' provider... 00:02:02.583 ==> default: Creating image (snapshot of base box volume). 00:02:02.583 ==> default: Creating domain with the following settings... 00:02:02.583 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733634051_76ce559dbd66be3f217c 00:02:02.583 ==> default: -- Domain type: kvm 00:02:02.583 ==> default: -- Cpus: 10 00:02:02.583 ==> default: -- Feature: acpi 00:02:02.583 ==> default: -- Feature: apic 00:02:02.583 ==> default: -- Feature: pae 00:02:02.583 ==> default: -- Memory: 12288M 00:02:02.583 ==> default: -- Memory Backing: hugepages: 00:02:02.583 ==> default: -- Management MAC: 00:02:02.583 ==> default: -- Loader: 00:02:02.583 ==> default: -- Nvram: 00:02:02.583 ==> default: -- Base box: spdk/fedora39 00:02:02.583 ==> default: -- Storage pool: default 00:02:02.583 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733634051_76ce559dbd66be3f217c.img (20G) 00:02:02.583 ==> default: -- Volume Cache: default 00:02:02.583 ==> default: -- Kernel: 00:02:02.583 ==> default: -- Initrd: 00:02:02.583 ==> default: -- Graphics Type: vnc 00:02:02.583 ==> default: -- Graphics Port: -1 00:02:02.583 ==> default: -- Graphics IP: 127.0.0.1 00:02:02.583 ==> default: -- Graphics Password: Not defined 00:02:02.583 ==> default: -- Video Type: cirrus 00:02:02.583 ==> default: -- Video VRAM: 9216 00:02:02.583 ==> default: -- Sound Type: 00:02:02.583 ==> default: -- Keymap: en-us 00:02:02.583 ==> default: -- TPM Path: 00:02:02.583 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:02.583 ==> default: -- Command line args: 00:02:02.583 ==> default: -> value=-device, 00:02:02.583 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:02:02.583 ==> default: -> value=-drive, 00:02:02.583 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:02:02.583 ==> default: -> value=-device, 00:02:02.583 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:02.583 ==> default: -> value=-device, 00:02:02.583 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:02:02.583 ==> default: -> value=-drive, 00:02:02.583 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:02.583 ==> default: -> value=-device, 00:02:02.583 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:02.583 ==> default: -> value=-drive, 00:02:02.583 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:02.583 ==> default: -> value=-device, 00:02:02.583 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:02.583 ==> default: -> value=-drive, 00:02:02.583 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:02.583 ==> default: -> value=-device, 00:02:02.583 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:02.583 ==> default: Creating shared folders metadata... 00:02:02.583 ==> default: Starting domain. 00:02:03.962 ==> default: Waiting for domain to get an IP address... 00:02:18.852 ==> default: Waiting for SSH to become available... 00:02:19.903 ==> default: Configuring and enabling network interfaces... 00:02:25.182 default: SSH address: 192.168.121.50:22 00:02:25.182 default: SSH username: vagrant 00:02:25.182 default: SSH auth method: private key 00:02:26.561 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:34.671 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:39.935 ==> default: Mounting SSHFS shared folder... 00:02:41.314 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:41.314 ==> default: Checking Mount.. 00:02:42.693 ==> default: Folder Successfully Mounted! 00:02:42.693 ==> default: Running provisioner: file... 00:02:43.631 default: ~/.gitconfig => .gitconfig 00:02:43.890 00:02:43.890 SUCCESS! 00:02:43.890 00:02:43.890 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:43.890 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:43.890 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:43.890 00:02:43.900 [Pipeline] } 00:02:43.918 [Pipeline] // stage 00:02:43.927 [Pipeline] dir 00:02:43.928 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:43.930 [Pipeline] { 00:02:43.942 [Pipeline] catchError 00:02:43.943 [Pipeline] { 00:02:43.954 [Pipeline] sh 00:02:44.234 + vagrant ssh-config --host vagrant 00:02:44.234 + sed -ne /^Host/,$p 00:02:44.234 + tee ssh_conf 00:02:47.518 Host vagrant 00:02:47.518 HostName 192.168.121.50 00:02:47.518 User vagrant 00:02:47.518 Port 22 00:02:47.518 UserKnownHostsFile /dev/null 00:02:47.518 StrictHostKeyChecking no 00:02:47.518 PasswordAuthentication no 00:02:47.518 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:47.518 IdentitiesOnly yes 00:02:47.518 LogLevel FATAL 00:02:47.518 ForwardAgent yes 00:02:47.518 ForwardX11 yes 00:02:47.518 00:02:47.534 [Pipeline] withEnv 00:02:47.536 [Pipeline] { 00:02:47.549 [Pipeline] sh 00:02:47.825 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:47.825 source /etc/os-release 00:02:47.825 [[ -e /image.version ]] && img=$(< /image.version) 00:02:47.825 # Minimal, systemd-like check. 
00:02:47.825 if [[ -e /.dockerenv ]]; then 00:02:47.825 # Clear garbage from the node's name: 00:02:47.825 # agt-er_autotest_547-896 -> autotest_547-896 00:02:47.825 # $HOSTNAME is the actual container id 00:02:47.825 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:47.825 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:47.825 # We can assume this is a mount from a host where container is running, 00:02:47.825 # so fetch its hostname to easily identify the target swarm worker. 00:02:47.825 container="$(< /etc/hostname) ($agent)" 00:02:47.825 else 00:02:47.825 # Fallback 00:02:47.825 container=$agent 00:02:47.825 fi 00:02:47.825 fi 00:02:47.825 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:47.825 00:02:48.097 [Pipeline] } 00:02:48.113 [Pipeline] // withEnv 00:02:48.125 [Pipeline] setCustomBuildProperty 00:02:48.141 [Pipeline] stage 00:02:48.144 [Pipeline] { (Tests) 00:02:48.162 [Pipeline] sh 00:02:48.443 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:48.717 [Pipeline] sh 00:02:48.998 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:49.274 [Pipeline] timeout 00:02:49.274 Timeout set to expire in 1 hr 0 min 00:02:49.276 [Pipeline] { 00:02:49.292 [Pipeline] sh 00:02:49.574 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:50.143 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:50.155 [Pipeline] sh 00:02:50.436 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:50.732 [Pipeline] sh 00:02:51.013 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:51.291 [Pipeline] sh 00:02:51.578 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:51.837 ++ readlink -f spdk_repo 00:02:51.837 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:51.837 + [[ -n /home/vagrant/spdk_repo ]] 00:02:51.837 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:51.837 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:51.837 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:51.837 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:51.837 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:51.837 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:51.837 + cd /home/vagrant/spdk_repo 00:02:51.837 + source /etc/os-release 00:02:51.837 ++ NAME='Fedora Linux' 00:02:51.837 ++ VERSION='39 (Cloud Edition)' 00:02:51.837 ++ ID=fedora 00:02:51.837 ++ VERSION_ID=39 00:02:51.837 ++ VERSION_CODENAME= 00:02:51.837 ++ PLATFORM_ID=platform:f39 00:02:51.837 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:51.837 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:51.837 ++ LOGO=fedora-logo-icon 00:02:51.837 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:51.837 ++ HOME_URL=https://fedoraproject.org/ 00:02:51.837 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:51.837 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:51.837 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:51.837 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:51.837 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:51.837 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:51.837 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:51.837 ++ SUPPORT_END=2024-11-12 00:02:51.837 ++ VARIANT='Cloud Edition' 00:02:51.837 ++ VARIANT_ID=cloud 00:02:51.837 + uname -a 00:02:51.837 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:51.837 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:51.837 Hugepages 00:02:51.837 node hugesize free / total 00:02:51.837 node0 1048576kB 0 / 0 00:02:51.837 node0 2048kB 0 / 0 00:02:51.837 00:02:51.837 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:51.837 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:51.837 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:51.837 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:52.111 + rm -f /tmp/spdk-ld-path 00:02:52.111 + source autorun-spdk.conf 00:02:52.111 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:52.111 ++ SPDK_TEST_NVMF=1 00:02:52.111 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:52.111 ++ SPDK_TEST_URING=1 00:02:52.111 ++ SPDK_TEST_USDT=1 00:02:52.111 ++ SPDK_RUN_UBSAN=1 00:02:52.111 ++ NET_TYPE=virt 00:02:52.111 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:52.111 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:52.111 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:52.111 ++ RUN_NIGHTLY=1 00:02:52.111 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:52.111 + [[ -n '' ]] 00:02:52.111 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:52.111 + for M in /var/spdk/build-*-manifest.txt 00:02:52.111 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:52.111 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:52.111 + for M in /var/spdk/build-*-manifest.txt 00:02:52.111 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:52.111 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:52.111 + for M in /var/spdk/build-*-manifest.txt 00:02:52.111 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:52.111 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:52.111 ++ uname 00:02:52.111 + [[ Linux == \L\i\n\u\x ]] 00:02:52.111 + sudo dmesg -T 00:02:52.111 + sudo dmesg --clear 00:02:52.111 + dmesg_pid=5969 00:02:52.111 + [[ Fedora Linux == FreeBSD ]] 00:02:52.111 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:52.111 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:52.111 + sudo dmesg 
-Tw 00:02:52.111 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:52.111 + [[ -x /usr/src/fio-static/fio ]] 00:02:52.111 + export FIO_BIN=/usr/src/fio-static/fio 00:02:52.111 + FIO_BIN=/usr/src/fio-static/fio 00:02:52.111 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:52.111 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:52.111 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:52.111 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:52.111 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:52.111 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:52.111 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:52.111 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:52.111 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:52.111 Test configuration: 00:02:52.111 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:52.111 SPDK_TEST_NVMF=1 00:02:52.111 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:52.111 SPDK_TEST_URING=1 00:02:52.112 SPDK_TEST_USDT=1 00:02:52.112 SPDK_RUN_UBSAN=1 00:02:52.112 NET_TYPE=virt 00:02:52.112 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:52.112 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:52.112 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:52.112 RUN_NIGHTLY=1 05:01:41 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:52.112 05:01:41 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:52.112 05:01:41 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:52.112 05:01:41 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:52.112 05:01:41 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:52.112 05:01:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.112 05:01:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.112 05:01:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.112 05:01:41 -- paths/export.sh@5 -- $ export PATH 00:02:52.112 05:01:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.112 05:01:41 -- common/autobuild_common.sh@439 -- 
$ out=/home/vagrant/spdk_repo/spdk/../output 00:02:52.112 05:01:41 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:52.112 05:01:41 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733634101.XXXXXX 00:02:52.112 05:01:41 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733634101.J4BgeH 00:02:52.112 05:01:41 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:52.112 05:01:41 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:02:52.112 05:01:41 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:52.112 05:01:41 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:52.112 05:01:41 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:52.112 05:01:41 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:52.112 05:01:41 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:52.112 05:01:41 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:52.112 05:01:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:52.112 05:01:41 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:52.112 05:01:41 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:52.112 05:01:41 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:52.112 05:01:41 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:52.112 05:01:41 -- spdk/autobuild.sh@16 -- $ date -u 00:02:52.382 Sun Dec 8 05:01:41 AM UTC 2024 00:02:52.382 05:01:41 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:52.382 LTS-67-gc13c99a5e 00:02:52.382 05:01:41 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:52.382 05:01:41 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:52.382 05:01:41 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:52.382 05:01:41 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:52.382 05:01:41 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:52.382 05:01:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:52.382 ************************************ 00:02:52.382 START TEST ubsan 00:02:52.382 ************************************ 00:02:52.382 using ubsan 00:02:52.382 05:01:41 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:52.382 00:02:52.382 real 0m0.001s 00:02:52.382 user 0m0.000s 00:02:52.382 sys 0m0.000s 00:02:52.382 05:01:41 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:52.382 ************************************ 00:02:52.382 05:01:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:52.382 END TEST ubsan 00:02:52.382 ************************************ 00:02:52.382 05:01:41 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:52.382 05:01:41 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:52.382 05:01:41 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:52.382 05:01:41 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:52.382 05:01:41 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:52.382 05:01:41 -- common/autotest_common.sh@10 -- $ set +x 
00:02:52.382 ************************************ 00:02:52.382 START TEST build_native_dpdk 00:02:52.382 ************************************ 00:02:52.382 05:01:41 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:52.382 05:01:41 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:52.382 05:01:41 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:52.382 05:01:41 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:52.382 05:01:41 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:52.382 05:01:41 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:52.382 05:01:41 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:52.382 05:01:41 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:52.382 05:01:41 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:52.382 05:01:41 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:52.382 05:01:41 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:52.382 05:01:41 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:52.382 05:01:41 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:52.382 05:01:41 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:52.382 05:01:41 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:52.382 05:01:41 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:52.382 05:01:41 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:52.382 05:01:41 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:52.382 05:01:41 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:52.382 05:01:41 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:52.382 05:01:41 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:52.382 eeb0605f11 version: 23.11.0 00:02:52.382 238778122a doc: update release notes for 23.11 00:02:52.382 46aa6b3cfc doc: fix description of RSS features 00:02:52.382 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:52.382 7e421ae345 devtools: support skipping forbid rule check 00:02:52.382 05:01:41 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:52.382 05:01:41 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:52.382 05:01:41 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:52.382 05:01:41 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:52.382 05:01:41 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:52.382 05:01:41 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:52.382 05:01:41 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:52.382 05:01:41 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:52.382 05:01:41 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:52.382 05:01:41 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:52.382 05:01:41 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:52.382 05:01:41 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:52.382 05:01:41 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:52.382 05:01:41 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:52.382 05:01:41 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:52.382 
05:01:41 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:52.382 05:01:42 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:52.382 05:01:42 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:52.382 05:01:42 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:52.382 05:01:42 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:52.382 05:01:42 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:52.382 05:01:42 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:52.382 05:01:42 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:52.382 05:01:42 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:52.382 05:01:42 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:52.382 05:01:42 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:52.382 05:01:42 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:52.382 05:01:42 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:52.382 05:01:42 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:52.382 05:01:42 -- scripts/common.sh@343 -- $ case "$op" in 00:02:52.382 05:01:42 -- scripts/common.sh@344 -- $ : 1 00:02:52.382 05:01:42 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:52.382 05:01:42 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:52.382 05:01:42 -- scripts/common.sh@364 -- $ decimal 23 00:02:52.382 05:01:42 -- scripts/common.sh@352 -- $ local d=23 00:02:52.382 05:01:42 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:52.382 05:01:42 -- scripts/common.sh@354 -- $ echo 23 00:02:52.382 05:01:42 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:52.382 05:01:42 -- scripts/common.sh@365 -- $ decimal 21 00:02:52.382 05:01:42 -- scripts/common.sh@352 -- $ local d=21 00:02:52.382 05:01:42 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:52.382 05:01:42 -- scripts/common.sh@354 -- $ echo 21 00:02:52.382 05:01:42 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:52.382 05:01:42 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:52.382 05:01:42 -- scripts/common.sh@366 -- $ return 1 00:02:52.382 05:01:42 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:52.382 patching file config/rte_config.h 00:02:52.383 Hunk #1 succeeded at 60 (offset 1 line). 00:02:52.383 05:01:42 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:52.383 05:01:42 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:52.383 05:01:42 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:52.383 05:01:42 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:52.383 05:01:42 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:52.383 05:01:42 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:52.383 05:01:42 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:52.383 05:01:42 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:52.383 05:01:42 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:52.383 05:01:42 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:52.383 05:01:42 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:52.383 05:01:42 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:52.383 05:01:42 -- scripts/common.sh@343 -- $ case "$op" in 00:02:52.383 05:01:42 -- scripts/common.sh@344 -- $ : 1 00:02:52.383 05:01:42 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:52.383 05:01:42 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:52.383 05:01:42 -- scripts/common.sh@364 -- $ decimal 23 00:02:52.383 05:01:42 -- scripts/common.sh@352 -- $ local d=23 00:02:52.383 05:01:42 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:52.383 05:01:42 -- scripts/common.sh@354 -- $ echo 23 00:02:52.383 05:01:42 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:52.383 05:01:42 -- scripts/common.sh@365 -- $ decimal 24 00:02:52.383 05:01:42 -- scripts/common.sh@352 -- $ local d=24 00:02:52.383 05:01:42 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:52.383 05:01:42 -- scripts/common.sh@354 -- $ echo 24 00:02:52.383 05:01:42 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:52.383 05:01:42 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:52.383 05:01:42 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:52.383 05:01:42 -- scripts/common.sh@367 -- $ return 0 00:02:52.383 05:01:42 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:52.383 patching file lib/pcapng/rte_pcapng.c 00:02:52.383 05:01:42 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:52.383 05:01:42 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:52.383 05:01:42 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:52.383 05:01:42 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:52.383 05:01:42 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:57.657 The Meson build system 00:02:57.657 Version: 1.5.0 00:02:57.657 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:57.657 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:57.657 Build type: native build 00:02:57.657 Program cat found: YES (/usr/bin/cat) 00:02:57.657 Project name: DPDK 00:02:57.657 Project version: 23.11.0 00:02:57.657 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:57.657 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:57.657 Host machine cpu family: x86_64 00:02:57.657 Host machine cpu: x86_64 00:02:57.657 Message: ## Building in Developer Mode ## 00:02:57.657 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:57.657 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:57.657 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:57.657 Program python3 found: YES (/usr/bin/python3) 00:02:57.657 Program cat found: YES (/usr/bin/cat) 00:02:57.657 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:57.657 Compiler for C supports arguments -march=native: YES 00:02:57.657 Checking for size of "void *" : 8 00:02:57.657 Checking for size of "void *" : 8 (cached) 00:02:57.657 Library m found: YES 00:02:57.657 Library numa found: YES 00:02:57.657 Has header "numaif.h" : YES 00:02:57.657 Library fdt found: NO 00:02:57.657 Library execinfo found: NO 00:02:57.657 Has header "execinfo.h" : YES 00:02:57.657 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:57.657 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:57.657 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:57.657 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:57.657 Run-time dependency openssl found: YES 3.1.1 00:02:57.657 Run-time dependency libpcap found: YES 1.10.4 00:02:57.657 Has header "pcap.h" with dependency libpcap: YES 00:02:57.657 Compiler for C supports arguments -Wcast-qual: YES 00:02:57.657 Compiler for C supports arguments -Wdeprecated: YES 00:02:57.657 Compiler for C supports arguments -Wformat: YES 00:02:57.657 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:57.657 Compiler for C supports arguments -Wformat-security: NO 00:02:57.657 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:57.657 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:57.657 Compiler for C supports arguments -Wnested-externs: YES 00:02:57.657 Compiler for C supports arguments -Wold-style-definition: YES 00:02:57.657 Compiler for C supports arguments -Wpointer-arith: YES 00:02:57.657 Compiler for C supports arguments -Wsign-compare: YES 00:02:57.657 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:57.657 Compiler for C supports arguments -Wundef: YES 00:02:57.657 Compiler for C supports arguments -Wwrite-strings: YES 00:02:57.657 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:57.657 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:57.657 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:57.657 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:57.657 Program objdump found: YES (/usr/bin/objdump) 00:02:57.657 Compiler for C supports arguments -mavx512f: YES 00:02:57.657 Checking if "AVX512 checking" compiles: YES 00:02:57.657 Fetching value of define "__SSE4_2__" : 1 00:02:57.657 Fetching value of define "__AES__" : 1 00:02:57.657 Fetching value of define "__AVX__" : 1 00:02:57.657 Fetching value of define "__AVX2__" : 1 00:02:57.657 Fetching value of define "__AVX512BW__" : (undefined) 00:02:57.657 Fetching value of define "__AVX512CD__" : (undefined) 00:02:57.657 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:57.657 Fetching value of define "__AVX512F__" : (undefined) 00:02:57.657 Fetching value of define "__AVX512VL__" : (undefined) 00:02:57.657 Fetching value of define "__PCLMUL__" : 1 00:02:57.657 Fetching value of define "__RDRND__" : 1 00:02:57.657 Fetching value of define "__RDSEED__" : 1 00:02:57.657 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:57.657 Fetching value of define "__znver1__" : (undefined) 00:02:57.657 Fetching value of define "__znver2__" : (undefined) 00:02:57.657 Fetching value of define "__znver3__" : (undefined) 00:02:57.657 Fetching value of define "__znver4__" : (undefined) 00:02:57.657 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:57.657 Message: lib/log: Defining dependency "log" 00:02:57.657 Message: lib/kvargs: Defining dependency "kvargs" 00:02:57.657 
Message: lib/telemetry: Defining dependency "telemetry" 00:02:57.657 Checking for function "getentropy" : NO 00:02:57.657 Message: lib/eal: Defining dependency "eal" 00:02:57.657 Message: lib/ring: Defining dependency "ring" 00:02:57.657 Message: lib/rcu: Defining dependency "rcu" 00:02:57.657 Message: lib/mempool: Defining dependency "mempool" 00:02:57.657 Message: lib/mbuf: Defining dependency "mbuf" 00:02:57.657 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:57.657 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:57.657 Compiler for C supports arguments -mpclmul: YES 00:02:57.657 Compiler for C supports arguments -maes: YES 00:02:57.657 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:57.657 Compiler for C supports arguments -mavx512bw: YES 00:02:57.658 Compiler for C supports arguments -mavx512dq: YES 00:02:57.658 Compiler for C supports arguments -mavx512vl: YES 00:02:57.658 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:57.658 Compiler for C supports arguments -mavx2: YES 00:02:57.658 Compiler for C supports arguments -mavx: YES 00:02:57.658 Message: lib/net: Defining dependency "net" 00:02:57.658 Message: lib/meter: Defining dependency "meter" 00:02:57.658 Message: lib/ethdev: Defining dependency "ethdev" 00:02:57.658 Message: lib/pci: Defining dependency "pci" 00:02:57.658 Message: lib/cmdline: Defining dependency "cmdline" 00:02:57.658 Message: lib/metrics: Defining dependency "metrics" 00:02:57.658 Message: lib/hash: Defining dependency "hash" 00:02:57.658 Message: lib/timer: Defining dependency "timer" 00:02:57.658 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:57.658 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:57.658 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:57.658 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:57.658 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:57.658 Message: lib/acl: Defining dependency "acl" 00:02:57.658 Message: lib/bbdev: Defining dependency "bbdev" 00:02:57.658 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:57.658 Run-time dependency libelf found: YES 0.191 00:02:57.658 Message: lib/bpf: Defining dependency "bpf" 00:02:57.658 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:57.658 Message: lib/compressdev: Defining dependency "compressdev" 00:02:57.658 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:57.658 Message: lib/distributor: Defining dependency "distributor" 00:02:57.658 Message: lib/dmadev: Defining dependency "dmadev" 00:02:57.658 Message: lib/efd: Defining dependency "efd" 00:02:57.658 Message: lib/eventdev: Defining dependency "eventdev" 00:02:57.658 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:57.658 Message: lib/gpudev: Defining dependency "gpudev" 00:02:57.658 Message: lib/gro: Defining dependency "gro" 00:02:57.658 Message: lib/gso: Defining dependency "gso" 00:02:57.658 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:57.658 Message: lib/jobstats: Defining dependency "jobstats" 00:02:57.658 Message: lib/latencystats: Defining dependency "latencystats" 00:02:57.658 Message: lib/lpm: Defining dependency "lpm" 00:02:57.658 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:57.658 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:57.658 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:57.658 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:02:57.658 Message: lib/member: Defining dependency "member" 00:02:57.658 Message: lib/pcapng: Defining dependency "pcapng" 00:02:57.658 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:57.658 Message: lib/power: Defining dependency "power" 00:02:57.658 Message: lib/rawdev: Defining dependency "rawdev" 00:02:57.658 Message: lib/regexdev: Defining dependency "regexdev" 00:02:57.658 Message: lib/mldev: Defining dependency "mldev" 00:02:57.658 Message: lib/rib: Defining dependency "rib" 00:02:57.658 Message: lib/reorder: Defining dependency "reorder" 00:02:57.658 Message: lib/sched: Defining dependency "sched" 00:02:57.658 Message: lib/security: Defining dependency "security" 00:02:57.658 Message: lib/stack: Defining dependency "stack" 00:02:57.658 Has header "linux/userfaultfd.h" : YES 00:02:57.658 Has header "linux/vduse.h" : YES 00:02:57.658 Message: lib/vhost: Defining dependency "vhost" 00:02:57.658 Message: lib/ipsec: Defining dependency "ipsec" 00:02:57.658 Message: lib/pdcp: Defining dependency "pdcp" 00:02:57.658 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:57.658 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:57.658 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:57.658 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:57.658 Message: lib/fib: Defining dependency "fib" 00:02:57.658 Message: lib/port: Defining dependency "port" 00:02:57.658 Message: lib/pdump: Defining dependency "pdump" 00:02:57.658 Message: lib/table: Defining dependency "table" 00:02:57.658 Message: lib/pipeline: Defining dependency "pipeline" 00:02:57.658 Message: lib/graph: Defining dependency "graph" 00:02:57.658 Message: lib/node: Defining dependency "node" 00:02:57.658 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:59.562 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:59.562 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:59.563 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:59.563 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:59.563 Compiler for C supports arguments -Wno-unused-value: YES 00:02:59.563 Compiler for C supports arguments -Wno-format: YES 00:02:59.563 Compiler for C supports arguments -Wno-format-security: YES 00:02:59.563 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:59.563 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:59.563 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:59.563 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:59.563 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:59.563 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:59.563 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:59.563 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:59.563 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:59.563 Has header "sys/epoll.h" : YES 00:02:59.563 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:59.563 Configuring doxy-api-html.conf using configuration 00:02:59.563 Configuring doxy-api-man.conf using configuration 00:02:59.563 Program mandb found: YES (/usr/bin/mandb) 00:02:59.563 Program sphinx-build found: NO 00:02:59.563 Configuring rte_build_config.h using configuration 00:02:59.563 Message: 00:02:59.563 ================= 00:02:59.563 Applications Enabled 00:02:59.563 ================= 
00:02:59.563 00:02:59.563 apps: 00:02:59.563 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:59.563 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:59.563 test-pmd, test-regex, test-sad, test-security-perf, 00:02:59.563 00:02:59.563 Message: 00:02:59.563 ================= 00:02:59.563 Libraries Enabled 00:02:59.563 ================= 00:02:59.563 00:02:59.563 libs: 00:02:59.563 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:59.563 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:59.563 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:59.563 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:59.563 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:59.563 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:59.563 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:59.563 00:02:59.563 00:02:59.563 Message: 00:02:59.563 =============== 00:02:59.563 Drivers Enabled 00:02:59.563 =============== 00:02:59.563 00:02:59.563 common: 00:02:59.563 00:02:59.563 bus: 00:02:59.563 pci, vdev, 00:02:59.563 mempool: 00:02:59.563 ring, 00:02:59.563 dma: 00:02:59.563 00:02:59.563 net: 00:02:59.563 i40e, 00:02:59.563 raw: 00:02:59.563 00:02:59.563 crypto: 00:02:59.563 00:02:59.563 compress: 00:02:59.563 00:02:59.563 regex: 00:02:59.563 00:02:59.563 ml: 00:02:59.563 00:02:59.563 vdpa: 00:02:59.563 00:02:59.563 event: 00:02:59.563 00:02:59.563 baseband: 00:02:59.563 00:02:59.563 gpu: 00:02:59.563 00:02:59.563 00:02:59.563 Message: 00:02:59.563 ================= 00:02:59.563 Content Skipped 00:02:59.563 ================= 00:02:59.563 00:02:59.563 apps: 00:02:59.563 00:02:59.563 libs: 00:02:59.563 00:02:59.563 drivers: 00:02:59.563 common/cpt: not in enabled drivers build config 00:02:59.563 common/dpaax: not in enabled drivers build config 00:02:59.563 common/iavf: not in enabled drivers build config 00:02:59.563 common/idpf: not in enabled drivers build config 00:02:59.563 common/mvep: not in enabled drivers build config 00:02:59.563 common/octeontx: not in enabled drivers build config 00:02:59.563 bus/auxiliary: not in enabled drivers build config 00:02:59.563 bus/cdx: not in enabled drivers build config 00:02:59.563 bus/dpaa: not in enabled drivers build config 00:02:59.563 bus/fslmc: not in enabled drivers build config 00:02:59.563 bus/ifpga: not in enabled drivers build config 00:02:59.563 bus/platform: not in enabled drivers build config 00:02:59.563 bus/vmbus: not in enabled drivers build config 00:02:59.563 common/cnxk: not in enabled drivers build config 00:02:59.563 common/mlx5: not in enabled drivers build config 00:02:59.563 common/nfp: not in enabled drivers build config 00:02:59.563 common/qat: not in enabled drivers build config 00:02:59.563 common/sfc_efx: not in enabled drivers build config 00:02:59.563 mempool/bucket: not in enabled drivers build config 00:02:59.563 mempool/cnxk: not in enabled drivers build config 00:02:59.563 mempool/dpaa: not in enabled drivers build config 00:02:59.563 mempool/dpaa2: not in enabled drivers build config 00:02:59.563 mempool/octeontx: not in enabled drivers build config 00:02:59.563 mempool/stack: not in enabled drivers build config 00:02:59.563 dma/cnxk: not in enabled drivers build config 00:02:59.563 dma/dpaa: not in enabled drivers build config 00:02:59.563 dma/dpaa2: not in enabled drivers build config 00:02:59.563 
dma/hisilicon: not in enabled drivers build config 00:02:59.563 dma/idxd: not in enabled drivers build config 00:02:59.563 dma/ioat: not in enabled drivers build config 00:02:59.563 dma/skeleton: not in enabled drivers build config 00:02:59.563 net/af_packet: not in enabled drivers build config 00:02:59.563 net/af_xdp: not in enabled drivers build config 00:02:59.563 net/ark: not in enabled drivers build config 00:02:59.563 net/atlantic: not in enabled drivers build config 00:02:59.563 net/avp: not in enabled drivers build config 00:02:59.563 net/axgbe: not in enabled drivers build config 00:02:59.563 net/bnx2x: not in enabled drivers build config 00:02:59.563 net/bnxt: not in enabled drivers build config 00:02:59.563 net/bonding: not in enabled drivers build config 00:02:59.563 net/cnxk: not in enabled drivers build config 00:02:59.563 net/cpfl: not in enabled drivers build config 00:02:59.563 net/cxgbe: not in enabled drivers build config 00:02:59.563 net/dpaa: not in enabled drivers build config 00:02:59.563 net/dpaa2: not in enabled drivers build config 00:02:59.563 net/e1000: not in enabled drivers build config 00:02:59.563 net/ena: not in enabled drivers build config 00:02:59.563 net/enetc: not in enabled drivers build config 00:02:59.563 net/enetfec: not in enabled drivers build config 00:02:59.563 net/enic: not in enabled drivers build config 00:02:59.563 net/failsafe: not in enabled drivers build config 00:02:59.563 net/fm10k: not in enabled drivers build config 00:02:59.563 net/gve: not in enabled drivers build config 00:02:59.563 net/hinic: not in enabled drivers build config 00:02:59.563 net/hns3: not in enabled drivers build config 00:02:59.563 net/iavf: not in enabled drivers build config 00:02:59.563 net/ice: not in enabled drivers build config 00:02:59.563 net/idpf: not in enabled drivers build config 00:02:59.563 net/igc: not in enabled drivers build config 00:02:59.563 net/ionic: not in enabled drivers build config 00:02:59.563 net/ipn3ke: not in enabled drivers build config 00:02:59.563 net/ixgbe: not in enabled drivers build config 00:02:59.563 net/mana: not in enabled drivers build config 00:02:59.563 net/memif: not in enabled drivers build config 00:02:59.563 net/mlx4: not in enabled drivers build config 00:02:59.563 net/mlx5: not in enabled drivers build config 00:02:59.563 net/mvneta: not in enabled drivers build config 00:02:59.563 net/mvpp2: not in enabled drivers build config 00:02:59.563 net/netvsc: not in enabled drivers build config 00:02:59.563 net/nfb: not in enabled drivers build config 00:02:59.563 net/nfp: not in enabled drivers build config 00:02:59.563 net/ngbe: not in enabled drivers build config 00:02:59.563 net/null: not in enabled drivers build config 00:02:59.563 net/octeontx: not in enabled drivers build config 00:02:59.563 net/octeon_ep: not in enabled drivers build config 00:02:59.563 net/pcap: not in enabled drivers build config 00:02:59.563 net/pfe: not in enabled drivers build config 00:02:59.563 net/qede: not in enabled drivers build config 00:02:59.563 net/ring: not in enabled drivers build config 00:02:59.563 net/sfc: not in enabled drivers build config 00:02:59.563 net/softnic: not in enabled drivers build config 00:02:59.563 net/tap: not in enabled drivers build config 00:02:59.563 net/thunderx: not in enabled drivers build config 00:02:59.563 net/txgbe: not in enabled drivers build config 00:02:59.563 net/vdev_netvsc: not in enabled drivers build config 00:02:59.563 net/vhost: not in enabled drivers build config 00:02:59.563 net/virtio: 
not in enabled drivers build config 00:02:59.563 net/vmxnet3: not in enabled drivers build config 00:02:59.563 raw/cnxk_bphy: not in enabled drivers build config 00:02:59.563 raw/cnxk_gpio: not in enabled drivers build config 00:02:59.563 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:59.563 raw/ifpga: not in enabled drivers build config 00:02:59.563 raw/ntb: not in enabled drivers build config 00:02:59.563 raw/skeleton: not in enabled drivers build config 00:02:59.563 crypto/armv8: not in enabled drivers build config 00:02:59.563 crypto/bcmfs: not in enabled drivers build config 00:02:59.563 crypto/caam_jr: not in enabled drivers build config 00:02:59.563 crypto/ccp: not in enabled drivers build config 00:02:59.563 crypto/cnxk: not in enabled drivers build config 00:02:59.563 crypto/dpaa_sec: not in enabled drivers build config 00:02:59.563 crypto/dpaa2_sec: not in enabled drivers build config 00:02:59.563 crypto/ipsec_mb: not in enabled drivers build config 00:02:59.563 crypto/mlx5: not in enabled drivers build config 00:02:59.563 crypto/mvsam: not in enabled drivers build config 00:02:59.563 crypto/nitrox: not in enabled drivers build config 00:02:59.563 crypto/null: not in enabled drivers build config 00:02:59.563 crypto/octeontx: not in enabled drivers build config 00:02:59.563 crypto/openssl: not in enabled drivers build config 00:02:59.563 crypto/scheduler: not in enabled drivers build config 00:02:59.563 crypto/uadk: not in enabled drivers build config 00:02:59.563 crypto/virtio: not in enabled drivers build config 00:02:59.563 compress/isal: not in enabled drivers build config 00:02:59.564 compress/mlx5: not in enabled drivers build config 00:02:59.564 compress/octeontx: not in enabled drivers build config 00:02:59.564 compress/zlib: not in enabled drivers build config 00:02:59.564 regex/mlx5: not in enabled drivers build config 00:02:59.564 regex/cn9k: not in enabled drivers build config 00:02:59.564 ml/cnxk: not in enabled drivers build config 00:02:59.564 vdpa/ifc: not in enabled drivers build config 00:02:59.564 vdpa/mlx5: not in enabled drivers build config 00:02:59.564 vdpa/nfp: not in enabled drivers build config 00:02:59.564 vdpa/sfc: not in enabled drivers build config 00:02:59.564 event/cnxk: not in enabled drivers build config 00:02:59.564 event/dlb2: not in enabled drivers build config 00:02:59.564 event/dpaa: not in enabled drivers build config 00:02:59.564 event/dpaa2: not in enabled drivers build config 00:02:59.564 event/dsw: not in enabled drivers build config 00:02:59.564 event/opdl: not in enabled drivers build config 00:02:59.564 event/skeleton: not in enabled drivers build config 00:02:59.564 event/sw: not in enabled drivers build config 00:02:59.564 event/octeontx: not in enabled drivers build config 00:02:59.564 baseband/acc: not in enabled drivers build config 00:02:59.564 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:59.564 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:59.564 baseband/la12xx: not in enabled drivers build config 00:02:59.564 baseband/null: not in enabled drivers build config 00:02:59.564 baseband/turbo_sw: not in enabled drivers build config 00:02:59.564 gpu/cuda: not in enabled drivers build config 00:02:59.564 00:02:59.564 00:02:59.564 Build targets in project: 220 00:02:59.564 00:02:59.564 DPDK 23.11.0 00:02:59.564 00:02:59.564 User defined options 00:02:59.564 libdir : lib 00:02:59.564 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:59.564 c_args : -fPIC -g -fcommon -Werror 
-Wno-stringop-overflow 00:02:59.564 c_link_args : 00:02:59.564 enable_docs : false 00:02:59.564 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:59.564 enable_kmods : false 00:02:59.564 machine : native 00:02:59.564 tests : false 00:02:59.564 00:02:59.564 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:59.564 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:59.564 05:01:49 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:59.564 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:59.564 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:59.564 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:59.823 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:59.823 [4/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:59.823 [5/710] Linking static target lib/librte_kvargs.a 00:02:59.823 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:59.823 [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:59.823 [8/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:59.823 [9/710] Linking static target lib/librte_log.a 00:02:59.823 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:00.081 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.081 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:00.340 [13/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.340 [14/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:00.340 [15/710] Linking target lib/librte_log.so.24.0 00:03:00.340 [16/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:00.340 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:00.340 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:00.598 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:00.598 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:00.598 [21/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:03:00.598 [22/710] Linking target lib/librte_kvargs.so.24.0 00:03:00.598 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:00.857 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:00.857 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:03:00.857 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:00.857 [27/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:00.857 [28/710] Linking static target lib/librte_telemetry.a 00:03:01.116 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:01.116 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:01.116 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:01.116 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:01.375 [33/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:01.375 [34/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.375 [35/710] Linking target lib/librte_telemetry.so.24.0 00:03:01.375 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:01.375 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:01.375 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:01.375 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:01.375 [40/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:03:01.375 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:01.634 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:01.634 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:01.634 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:01.634 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:01.894 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:01.894 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:02.153 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:02.153 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:02.153 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:02.153 [51/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:02.153 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:02.153 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:02.153 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:02.412 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:02.412 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:02.412 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:02.412 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:02.672 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:02.672 [60/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:02.672 [61/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:02.672 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:02.672 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:02.931 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:02.931 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:02.931 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:02.931 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:02.931 [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:03.190 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:03.190 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:03.190 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:03.190 [72/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 
00:03:03.190 [73/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:03.449 [74/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:03.449 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:03.449 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:03.449 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:03.449 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:03.707 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:03.707 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:03.966 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:03.966 [82/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:03.966 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:03.966 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:03.966 [85/710] Linking static target lib/librte_ring.a 00:03:04.225 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:04.225 [87/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:04.225 [88/710] Linking static target lib/librte_eal.a 00:03:04.225 [89/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.225 [90/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:04.483 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:04.483 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:04.483 [93/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:04.483 [94/710] Linking static target lib/librte_mempool.a 00:03:04.483 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:04.742 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:04.742 [97/710] Linking static target lib/librte_rcu.a 00:03:04.742 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:04.742 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:05.000 [100/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:05.000 [101/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.000 [102/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:05.000 [103/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.257 [104/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:05.257 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:05.257 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:05.257 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:05.257 [108/710] Linking static target lib/librte_mbuf.a 00:03:05.518 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:05.518 [110/710] Linking static target lib/librte_net.a 00:03:05.518 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:05.518 [112/710] Linking static target lib/librte_meter.a 00:03:05.776 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.776 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:05.776 [115/710] Generating 
lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.776 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:05.776 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:05.776 [118/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.776 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:06.710 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:06.710 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:06.710 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:06.969 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:06.969 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:06.969 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:06.969 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:06.969 [127/710] Linking static target lib/librte_pci.a 00:03:06.969 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:07.228 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:07.228 [130/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:07.228 [131/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.228 [132/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:07.228 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:07.228 [134/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:07.228 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:07.487 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:07.487 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:07.487 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:07.487 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:07.487 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:07.487 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:07.487 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:07.747 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:07.747 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:07.747 [145/710] Linking static target lib/librte_cmdline.a 00:03:08.007 [146/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:08.007 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:08.007 [148/710] Linking static target lib/librte_metrics.a 00:03:08.007 [149/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:08.267 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:08.526 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.526 [152/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.785 [153/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:08.786 [154/710] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:08.786 [155/710] Linking static target lib/librte_timer.a 00:03:09.045 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.304 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:09.304 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:09.563 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:09.563 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:10.132 [161/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:10.132 [162/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:10.132 [163/710] Linking static target lib/librte_bitratestats.a 00:03:10.132 [164/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:10.132 [165/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:10.132 [166/710] Linking static target lib/librte_ethdev.a 00:03:10.132 [167/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.391 [168/710] Linking target lib/librte_eal.so.24.0 00:03:10.391 [169/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.391 [170/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:10.391 [171/710] Linking static target lib/librte_bbdev.a 00:03:10.391 [172/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:10.391 [173/710] Linking static target lib/librte_hash.a 00:03:10.391 [174/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:10.391 [175/710] Linking target lib/librte_ring.so.24.0 00:03:10.649 [176/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:10.649 [177/710] Linking target lib/librte_rcu.so.24.0 00:03:10.649 [178/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:10.908 [179/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:10.908 [180/710] Linking target lib/librte_mempool.so.24.0 00:03:10.908 [181/710] Linking target lib/librte_meter.so.24.0 00:03:10.908 [182/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:10.908 [183/710] Linking target lib/librte_pci.so.24.0 00:03:10.908 [184/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:10.908 [185/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:10.908 [186/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:10.908 [187/710] Linking static target lib/acl/libavx2_tmp.a 00:03:10.908 [188/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.908 [189/710] Linking target lib/librte_mbuf.so.24.0 00:03:10.908 [190/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.908 [191/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:10.908 [192/710] Linking target lib/librte_timer.so.24.0 00:03:11.167 [193/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:11.167 [194/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:11.167 [195/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:11.167 [196/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:03:11.167 [197/710] Linking 
static target lib/acl/libavx512_tmp.a 00:03:11.167 [198/710] Linking target lib/librte_net.so.24.0 00:03:11.167 [199/710] Linking target lib/librte_bbdev.so.24.0 00:03:11.426 [200/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:11.426 [201/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:11.426 [202/710] Linking target lib/librte_cmdline.so.24.0 00:03:11.426 [203/710] Linking static target lib/librte_acl.a 00:03:11.426 [204/710] Linking target lib/librte_hash.so.24.0 00:03:11.426 [205/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:11.426 [206/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:11.426 [207/710] Linking static target lib/librte_cfgfile.a 00:03:11.685 [208/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:11.685 [209/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.685 [210/710] Linking target lib/librte_acl.so.24.0 00:03:11.685 [211/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:11.943 [212/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.943 [213/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:03:11.943 [214/710] Linking target lib/librte_cfgfile.so.24.0 00:03:11.943 [215/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:11.943 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:12.202 [217/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:12.202 [218/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:12.202 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:12.461 [220/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:12.461 [221/710] Linking static target lib/librte_bpf.a 00:03:12.461 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:12.461 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:12.461 [224/710] Linking static target lib/librte_compressdev.a 00:03:12.721 [225/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.721 [226/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:12.721 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:12.980 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:12.980 [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:12.980 [230/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.980 [231/710] Linking static target lib/librte_distributor.a 00:03:12.980 [232/710] Linking target lib/librte_compressdev.so.24.0 00:03:12.980 [233/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:13.239 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.239 [235/710] Linking target lib/librte_distributor.so.24.0 00:03:13.239 [236/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:13.239 [237/710] Linking static target lib/librte_dmadev.a 00:03:13.499 [238/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:13.758 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.758 [240/710] Linking target lib/librte_dmadev.so.24.0 00:03:13.758 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:13.758 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:14.016 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:14.275 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:14.275 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:14.275 [246/710] Linking static target lib/librte_efd.a 00:03:14.535 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:14.535 [248/710] Linking static target lib/librte_cryptodev.a 00:03:14.535 [249/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:14.535 [250/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.535 [251/710] Linking target lib/librte_efd.so.24.0 00:03:14.793 [252/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:14.793 [253/710] Linking static target lib/librte_dispatcher.a 00:03:14.793 [254/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:15.052 [255/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.052 [256/710] Linking target lib/librte_ethdev.so.24.0 00:03:15.311 [257/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:15.311 [258/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:15.311 [259/710] Linking target lib/librte_metrics.so.24.0 00:03:15.311 [260/710] Linking target lib/librte_bpf.so.24.0 00:03:15.311 [261/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:15.311 [262/710] Linking static target lib/librte_gpudev.a 00:03:15.311 [263/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.311 [264/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:15.311 [265/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:15.312 [266/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:15.312 [267/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:15.570 [268/710] Linking target lib/librte_bitratestats.so.24.0 00:03:15.570 [269/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:15.828 [270/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.828 [271/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:15.828 [272/710] Linking target lib/librte_cryptodev.so.24.0 00:03:15.828 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:16.087 [274/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:16.087 [275/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:16.087 [276/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.087 [277/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:16.345 [278/710] Linking target 
lib/librte_gpudev.so.24.0 00:03:16.345 [279/710] Linking static target lib/librte_eventdev.a 00:03:16.346 [280/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:16.346 [281/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:16.346 [282/710] Linking static target lib/librte_gro.a 00:03:16.346 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:16.346 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:16.346 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:16.605 [286/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.605 [287/710] Linking target lib/librte_gro.so.24.0 00:03:16.605 [288/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:16.605 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:16.864 [290/710] Linking static target lib/librte_gso.a 00:03:16.864 [291/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.864 [292/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:16.864 [293/710] Linking target lib/librte_gso.so.24.0 00:03:17.123 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:17.123 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:17.123 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:17.123 [297/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:17.123 [298/710] Linking static target lib/librte_jobstats.a 00:03:17.123 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:17.382 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:17.382 [301/710] Linking static target lib/librte_ip_frag.a 00:03:17.382 [302/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:17.382 [303/710] Linking static target lib/librte_latencystats.a 00:03:17.382 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.641 [305/710] Linking target lib/librte_jobstats.so.24.0 00:03:17.641 [306/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.641 [307/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.641 [308/710] Linking target lib/librte_ip_frag.so.24.0 00:03:17.641 [309/710] Linking target lib/librte_latencystats.so.24.0 00:03:17.641 [310/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:17.641 [311/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:17.641 [312/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:17.901 [313/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:17.901 [314/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:17.901 [315/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:17.901 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:17.901 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:18.161 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.161 [319/710] Linking target lib/librte_eventdev.so.24.0 
00:03:18.421 [320/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:18.421 [321/710] Linking static target lib/librte_lpm.a 00:03:18.421 [322/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:18.421 [323/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:18.421 [324/710] Linking target lib/librte_dispatcher.so.24.0 00:03:18.421 [325/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:18.680 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:18.680 [327/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:18.680 [328/710] Linking static target lib/librte_pcapng.a 00:03:18.680 [329/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:18.680 [330/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.680 [331/710] Linking target lib/librte_lpm.so.24.0 00:03:18.680 [332/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:18.680 [333/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:18.940 [334/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:18.940 [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.940 [336/710] Linking target lib/librte_pcapng.so.24.0 00:03:18.940 [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:18.940 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:19.200 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:19.200 [340/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:19.460 [341/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:19.460 [342/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:19.460 [343/710] Linking static target lib/librte_power.a 00:03:19.460 [344/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:19.460 [345/710] Linking static target lib/librte_member.a 00:03:19.460 [346/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:19.460 [347/710] Linking static target lib/librte_regexdev.a 00:03:19.460 [348/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:19.460 [349/710] Linking static target lib/librte_rawdev.a 00:03:19.719 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:19.719 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:19.719 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:19.719 [353/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.979 [354/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:19.979 [355/710] Linking target lib/librte_member.so.24.0 00:03:19.979 [356/710] Linking static target lib/librte_mldev.a 00:03:19.979 [357/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.979 [358/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:19.979 [359/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.979 [360/710] Linking target lib/librte_power.so.24.0 00:03:19.979 [361/710] Linking target 
lib/librte_rawdev.so.24.0 00:03:20.238 [362/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:20.238 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.238 [364/710] Linking target lib/librte_regexdev.so.24.0 00:03:20.238 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:20.497 [366/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:20.497 [367/710] Linking static target lib/librte_reorder.a 00:03:20.497 [368/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:20.497 [369/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:20.497 [370/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:20.497 [371/710] Linking static target lib/librte_rib.a 00:03:20.756 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:20.756 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:20.756 [374/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.756 [375/710] Linking target lib/librte_reorder.so.24.0 00:03:20.756 [376/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:20.756 [377/710] Linking static target lib/librte_stack.a 00:03:21.015 [378/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:21.015 [379/710] Linking static target lib/librte_security.a 00:03:21.015 [380/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:21.015 [381/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.015 [382/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.015 [383/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.015 [384/710] Linking target lib/librte_rib.so.24.0 00:03:21.015 [385/710] Linking target lib/librte_stack.so.24.0 00:03:21.015 [386/710] Linking target lib/librte_mldev.so.24.0 00:03:21.274 [387/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:21.274 [388/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.274 [389/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:21.274 [390/710] Linking target lib/librte_security.so.24.0 00:03:21.533 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:21.533 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:21.533 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:21.793 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:21.793 [395/710] Linking static target lib/librte_sched.a 00:03:22.074 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:22.074 [397/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.074 [398/710] Linking target lib/librte_sched.so.24.0 00:03:22.074 [399/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:22.350 [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:22.350 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:22.350 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:22.623 [403/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 
00:03:22.623 [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:22.883 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:22.883 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:23.142 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:23.401 [408/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:23.401 [409/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:23.401 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:23.401 [411/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:23.401 [412/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:23.401 [413/710] Linking static target lib/librte_ipsec.a 00:03:23.969 [414/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:03:23.969 [415/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:03:23.969 [416/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.969 [417/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:23.969 [418/710] Linking target lib/librte_ipsec.so.24.0 00:03:23.969 [419/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:23.969 [420/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:23.969 [421/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:03:23.969 [422/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:03:23.969 [423/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:24.903 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:24.903 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:24.903 [426/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:24.903 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:24.903 [428/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:24.903 [429/710] Linking static target lib/librte_pdcp.a 00:03:24.903 [430/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:24.903 [431/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:24.903 [432/710] Linking static target lib/librte_fib.a 00:03:25.162 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.162 [434/710] Linking target lib/librte_pdcp.so.24.0 00:03:25.162 [435/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.419 [436/710] Linking target lib/librte_fib.so.24.0 00:03:25.419 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:25.983 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:25.983 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:25.983 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:25.983 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:26.257 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:26.257 [443/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:26.257 [444/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:26.515 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:26.515 [446/710] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:26.515 [447/710] Linking static target lib/librte_port.a 00:03:26.773 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:26.773 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:26.774 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:26.774 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:27.032 [452/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:27.032 [453/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.032 [454/710] Linking target lib/librte_port.so.24.0 00:03:27.032 [455/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:27.032 [456/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:27.032 [457/710] Linking static target lib/librte_pdump.a 00:03:27.032 [458/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:27.291 [459/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:27.291 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.291 [461/710] Linking target lib/librte_pdump.so.24.0 00:03:27.550 [462/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:27.809 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:27.809 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:28.068 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:28.068 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:28.068 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:28.068 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:28.327 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:28.327 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:28.327 [471/710] Linking static target lib/librte_table.a 00:03:28.587 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:28.587 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:29.156 [474/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.156 [475/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:29.156 [476/710] Linking target lib/librte_table.so.24.0 00:03:29.156 [477/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:29.156 [478/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:29.415 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:29.415 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:29.674 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:29.933 [482/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:29.933 [483/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:29.933 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:29.933 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:30.193 [486/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 
00:03:30.452 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:30.452 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:30.712 [489/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:30.712 [490/710] Linking static target lib/librte_graph.a 00:03:30.712 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:30.712 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:30.971 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:31.231 [494/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:31.490 [495/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:31.490 [496/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.490 [497/710] Linking target lib/librte_graph.so.24.0 00:03:31.490 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:31.490 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:31.750 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:32.009 [501/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:32.009 [502/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:32.009 [503/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:32.009 [504/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:32.009 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:32.268 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:32.527 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:32.527 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:32.787 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:32.787 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:32.787 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:32.787 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:32.787 [513/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:32.787 [514/710] Linking static target lib/librte_node.a 00:03:33.047 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:33.307 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.307 [517/710] Linking target lib/librte_node.so.24.0 00:03:33.307 [518/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:33.307 [519/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:33.307 [520/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:33.307 [521/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:33.566 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:33.566 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:33.566 [524/710] Linking static target drivers/librte_bus_vdev.a 00:03:33.566 [525/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:33.566 [526/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:33.566 [527/710] Linking static target drivers/librte_bus_pci.a 00:03:33.826 [528/710] 
Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.826 [529/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:33.826 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:33.826 [531/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:33.826 [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:33.826 [533/710] Linking target drivers/librte_bus_vdev.so.24.0 00:03:33.826 [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:34.086 [535/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:34.086 [536/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:34.086 [537/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:34.086 [538/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.086 [539/710] Linking target drivers/librte_bus_pci.so.24.0 00:03:34.345 [540/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:34.345 [541/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:34.345 [542/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:34.345 [543/710] Linking static target drivers/librte_mempool_ring.a 00:03:34.345 [544/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:34.345 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:03:34.605 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:34.864 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:35.124 [548/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:35.124 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:35.124 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:35.124 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:36.059 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:36.059 [553/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:36.059 [554/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:36.059 [555/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:36.059 [556/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:36.059 [557/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:36.626 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:36.884 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:36.884 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:36.884 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:37.144 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:37.403 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:37.662 [564/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 
00:03:37.662 [565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:37.922 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:37.922 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:38.490 [568/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:38.490 [569/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:38.490 [570/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:38.490 [571/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:38.490 [572/710] Linking static target lib/librte_vhost.a 00:03:38.490 [573/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:38.490 [574/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:38.490 [575/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:38.748 [576/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:39.007 [577/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:39.007 [578/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:39.007 [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:39.007 [580/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:39.266 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:39.266 [582/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:39.524 [583/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.524 [584/710] Linking target lib/librte_vhost.so.24.0 00:03:39.524 [585/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:39.524 [586/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:39.524 [587/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:39.524 [588/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:39.524 [589/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:39.783 [590/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:39.783 [591/710] Linking static target drivers/librte_net_i40e.a 00:03:39.783 [592/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:39.783 [593/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:40.042 [594/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:40.301 [595/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.301 [596/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:40.301 [597/710] Linking target drivers/librte_net_i40e.so.24.0 00:03:40.301 [598/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:40.301 [599/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:40.867 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:40.867 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:40.867 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:41.125 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:41.125 [604/710] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:41.125 [605/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:41.385 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:41.385 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:41.643 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:41.901 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:41.901 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:41.901 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:42.159 [612/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:42.159 [613/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:42.159 [614/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:42.159 [615/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:42.159 [616/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:42.159 [617/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:42.725 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:42.725 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:42.725 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:42.983 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:42.983 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:42.983 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:43.549 [624/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:43.549 [625/710] Linking static target lib/librte_pipeline.a 00:03:43.808 [626/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:44.068 [627/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:44.068 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:44.068 [629/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:44.327 [630/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:44.327 [631/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:44.327 [632/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:44.327 [633/710] Linking target app/dpdk-dumpcap 00:03:44.586 [634/710] Linking target app/dpdk-graph 00:03:44.586 [635/710] Linking target app/dpdk-pdump 00:03:44.586 [636/710] Linking target app/dpdk-proc-info 00:03:44.586 [637/710] Linking target app/dpdk-test-acl 00:03:44.845 [638/710] Linking target app/dpdk-test-cmdline 00:03:44.845 [639/710] Linking target app/dpdk-test-compress-perf 00:03:44.845 [640/710] Linking target app/dpdk-test-crypto-perf 00:03:44.845 [641/710] Linking target app/dpdk-test-dma-perf 00:03:45.105 [642/710] Linking target app/dpdk-test-fib 00:03:45.105 [643/710] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:45.105 [644/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:45.365 [645/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:45.365 [646/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:45.365 [647/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:45.365 [648/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:45.623 [649/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:45.623 [650/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:45.881 [651/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:45.881 [652/710] Linking target app/dpdk-test-gpudev 00:03:45.881 [653/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:46.140 [654/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:46.140 [655/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:46.140 [656/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:46.140 [657/710] Linking target app/dpdk-test-eventdev 00:03:46.400 [658/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.400 [659/710] Linking target lib/librte_pipeline.so.24.0 00:03:46.400 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:46.659 [661/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:46.659 [662/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:46.659 [663/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:46.659 [664/710] Linking target app/dpdk-test-flow-perf 00:03:46.659 [665/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:46.919 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:46.919 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:46.919 [668/710] Linking target app/dpdk-test-bbdev 00:03:46.919 [669/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:47.178 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:47.178 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:47.178 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:47.178 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:47.799 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:47.799 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:47.799 [676/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:47.799 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:48.067 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:48.326 [679/710] Linking target app/dpdk-test-pipeline 00:03:48.326 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:48.326 [681/710] Linking target app/dpdk-test-mldev 00:03:48.326 [682/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:48.584 [683/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:48.843 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:48.843 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:49.103 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:49.103 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:49.103 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:49.362 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:49.362 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:49.620 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:49.620 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:49.620 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:50.188 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:50.188 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:50.446 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:50.447 [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:50.706 [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:50.965 [699/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:50.965 [700/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:50.965 [701/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:50.965 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:50.965 [703/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:51.223 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:51.223 [705/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:51.223 [706/710] Linking target app/dpdk-test-regex 00:03:51.223 [707/710] Linking target app/dpdk-test-sad 00:03:51.483 [708/710] Linking target app/dpdk-testpmd 00:03:51.742 [709/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:52.001 [710/710] Linking target app/dpdk-test-security-perf 00:03:52.001 05:02:41 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:52.001 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:52.001 [0/1] Installing files. 
00:03:52.262 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.263 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:52.264 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:52.264 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:52.264 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:52.265 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.266 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.266 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.526 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.526 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.526 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.526 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.526 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.526 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.526 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.526 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.526 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.526 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.526 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.526 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.526 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.526 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.526 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.526 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.526 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.526 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.526 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:52.527 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.528 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:52.528 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:52.528 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:52.528 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:52.528 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:52.528 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:52.528 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:52.528 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:52.528 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:52.528 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:52.528 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:52.528 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.528 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.529 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:52.529 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.529 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.529 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.529 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.529 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.529 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.529 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.529 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.529 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.529 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.789 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.789 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.789 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.789 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:52.789 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.789 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:52.789 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.789 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:52.789 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:52.789 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:52.789 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.789 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.789 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.789 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.789 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.789 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.790 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.790 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.790 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.790 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.790 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.790 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.790 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.790 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.790 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.790 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.790 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.790 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.790 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.790 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.790 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:52.791 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:52.791 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:52.791 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:52.791 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:52.791 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:52.791 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:52.791 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:52.791 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:52.791 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:52.791 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:52.791 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:52.791 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:52.791 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:52.791 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:52.791 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:52.791 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:52.791 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:52.791 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:52.791 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:52.791 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:52.791 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:52.791 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:52.791 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:52.791 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:52.791 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:52.791 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:52.791 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:52.791 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:52.791 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:52.791 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:52.791 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:52.791 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:52.791 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:52.791 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:52.791 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:52.791 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:52.791 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:52.791 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:52.791 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:52.791 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:52.791 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:52.791 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:52.791 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:52.791 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:52.791 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:52.791 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:52.791 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:52.791 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:52.791 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:52.791 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:52.791 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:52.791 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:52.791 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:52.791 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:52.791 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:52.791 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:52.791 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:52.791 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:52.791 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:52.791 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:52.791 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:52.791 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:52.791 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:52.791 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:52.791 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:52.791 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:52.791 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:52.791 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:52.792 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:52.792 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:52.792 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:52.792 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:52.792 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:52.792 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:52.792 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:52.792 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:52.792 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:52.792 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:52.792 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:52.792 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:52.792 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:52.792 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:52.792 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:52.792 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:52.792 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:52.792 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:52.792 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:52.792 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:52.792 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:52.792 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:52.792 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:52.792 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:52.792 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:52.792 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:52.792 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:52.792 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:52.792 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:52.792 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:52.792 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:52.792 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:52.792 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:52.792 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:52.792 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:52.792 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:52.792 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:52.792 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:52.792 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:52.792 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:52.792 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:52.792 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:52.792 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:52.792 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:52.792 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:52.792 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:52.792 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:52.792 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:52.792 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:52.792 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:52.792 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:52.792 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:52.792 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:52.792 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:52.792 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:52.792 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:52.792 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:52.792 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:52.792 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:52.792 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:52.792 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:52.792 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:52.792 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:52.792 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
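
The "Installing symlink pointing to ..." entries above show each DPDK library getting the usual shared-object chain (librte_foo.so.24.0, an ABI-major link librte_foo.so.24, and an unversioned librte_foo.so), with PMD drivers additionally linked under the dpdk/pmds-24.0 plugin directory. The sketch below is a minimal, hypothetical illustration of how such a symlink chain can be created; it is not taken from the DPDK/meson install logic, and the library directory argument is only an assumed example.

#!/usr/bin/env bash
# Hypothetical sketch only: recreate the .so.MAJOR and .so symlinks for fully
# versioned librte_*.so.MAJOR.MINOR files in a given directory.
set -euo pipefail

libdir=${1:?usage: $0 <libdir>}   # e.g. /home/vagrant/spdk_repo/dpdk/build/lib (assumed)

for full in "$libdir"/librte_*.so.*.*; do
    [ -e "$full" ] || continue
    base=${full%.*}                             # librte_foo.so.24.0 -> librte_foo.so.24
    soname=${base%.*}                           # librte_foo.so.24   -> librte_foo.so
    ln -sfn "$(basename "$full")" "$base"       # librte_foo.so.24 -> librte_foo.so.24.0
    ln -sfn "$(basename "$base")" "$soname"     # librte_foo.so    -> librte_foo.so.24
done
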
00:03:52.792 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:52.792 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:53.049 05:02:42 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:53.049 05:02:42 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:53.049 05:02:42 -- common/autobuild_common.sh@203 -- $ cat 00:03:53.049 05:02:42 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:53.050 00:03:53.050 real 1m0.631s 00:03:53.050 user 7m29.741s 00:03:53.050 sys 1m3.471s 00:03:53.050 05:02:42 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:53.050 05:02:42 -- common/autotest_common.sh@10 -- $ set +x 00:03:53.050 ************************************ 00:03:53.050 END TEST build_native_dpdk 00:03:53.050 ************************************ 00:03:53.050 05:02:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:53.050 05:02:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:53.050 05:02:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:53.050 05:02:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:53.050 05:02:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:53.050 05:02:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:53.050 05:02:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:53.050 05:02:42 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:53.050 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:53.307 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:53.307 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:53.307 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:53.566 Using 'verbs' RDMA provider 00:04:09.441 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:04:21.661 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:04:21.661 Creating mk/config.mk...done. 00:04:21.661 Creating mk/cc.flags.mk...done. 00:04:21.661 Type 'make' to build. 00:04:21.661 05:03:10 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:04:21.661 05:03:10 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:04:21.661 05:03:10 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:04:21.661 05:03:10 -- common/autotest_common.sh@10 -- $ set +x 00:04:21.661 ************************************ 00:04:21.661 START TEST make 00:04:21.661 ************************************ 00:04:21.661 05:03:10 -- common/autotest_common.sh@1114 -- $ make -j10 00:04:21.661 make[1]: Nothing to be done for 'all'. 
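
At this point the DPDK install has finished and SPDK's configure is invoked with --with-dpdk pointing at the freshly populated build directory; the log notes that the extra libraries are resolved through the pkg-config files installed into build/lib/pkgconfig. The block below is a rough reconstruction of that step: the configure flags are copied from the trace above, while the pkg-config check and the use of nproc instead of the CI's fixed -j10 are illustrative additions.

#!/usr/bin/env bash
# Reconstruction of the configure/build step traced above; flags taken from the log.
set -euo pipefail

DPDK_BUILD=/home/vagrant/spdk_repo/dpdk/build
SPDK_DIR=/home/vagrant/spdk_repo/spdk

# Confirm the installed DPDK is discoverable via its pkg-config metadata (libdpdk.pc).
PKG_CONFIG_PATH="$DPDK_BUILD/lib/pkgconfig" pkg-config --modversion libdpdk

cd "$SPDK_DIR"
./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-uring \
    --with-dpdk="$DPDK_BUILD" --with-shared

make -j"$(nproc)"    # the CI run above used -j10
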
00:04:43.597 CC lib/ut_mock/mock.o 00:04:43.597 CC lib/log/log.o 00:04:43.597 CC lib/log/log_flags.o 00:04:43.597 CC lib/log/log_deprecated.o 00:04:43.597 CC lib/ut/ut.o 00:04:43.597 LIB libspdk_ut_mock.a 00:04:43.597 SO libspdk_ut_mock.so.5.0 00:04:43.597 LIB libspdk_log.a 00:04:43.597 LIB libspdk_ut.a 00:04:43.597 SO libspdk_log.so.6.1 00:04:43.597 SYMLINK libspdk_ut_mock.so 00:04:43.597 SO libspdk_ut.so.1.0 00:04:43.597 SYMLINK libspdk_log.so 00:04:43.597 SYMLINK libspdk_ut.so 00:04:43.597 CC lib/ioat/ioat.o 00:04:43.597 CXX lib/trace_parser/trace.o 00:04:43.597 CC lib/dma/dma.o 00:04:43.597 CC lib/util/base64.o 00:04:43.597 CC lib/util/bit_array.o 00:04:43.597 CC lib/util/cpuset.o 00:04:43.597 CC lib/util/crc16.o 00:04:43.597 CC lib/util/crc32.o 00:04:43.597 CC lib/util/crc32c.o 00:04:43.597 CC lib/vfio_user/host/vfio_user_pci.o 00:04:43.597 CC lib/util/crc32_ieee.o 00:04:43.597 CC lib/util/crc64.o 00:04:43.597 CC lib/vfio_user/host/vfio_user.o 00:04:43.597 CC lib/util/dif.o 00:04:43.597 LIB libspdk_dma.a 00:04:43.597 CC lib/util/fd.o 00:04:43.597 CC lib/util/file.o 00:04:43.597 SO libspdk_dma.so.3.0 00:04:43.597 LIB libspdk_ioat.a 00:04:43.597 CC lib/util/hexlify.o 00:04:43.597 SYMLINK libspdk_dma.so 00:04:43.597 CC lib/util/iov.o 00:04:43.597 CC lib/util/math.o 00:04:43.597 SO libspdk_ioat.so.6.0 00:04:43.597 CC lib/util/pipe.o 00:04:43.597 CC lib/util/strerror_tls.o 00:04:43.597 LIB libspdk_vfio_user.a 00:04:43.597 SYMLINK libspdk_ioat.so 00:04:43.597 CC lib/util/string.o 00:04:43.597 CC lib/util/uuid.o 00:04:43.597 SO libspdk_vfio_user.so.4.0 00:04:43.597 CC lib/util/fd_group.o 00:04:43.597 SYMLINK libspdk_vfio_user.so 00:04:43.597 CC lib/util/xor.o 00:04:43.597 CC lib/util/zipf.o 00:04:43.597 LIB libspdk_util.a 00:04:43.597 SO libspdk_util.so.8.0 00:04:43.597 SYMLINK libspdk_util.so 00:04:43.597 LIB libspdk_trace_parser.a 00:04:43.597 CC lib/idxd/idxd.o 00:04:43.597 CC lib/conf/conf.o 00:04:43.597 CC lib/idxd/idxd_user.o 00:04:43.597 CC lib/idxd/idxd_kernel.o 00:04:43.597 CC lib/env_dpdk/env.o 00:04:43.597 CC lib/vmd/vmd.o 00:04:43.597 CC lib/env_dpdk/memory.o 00:04:43.597 CC lib/json/json_parse.o 00:04:43.597 CC lib/rdma/common.o 00:04:43.597 SO libspdk_trace_parser.so.4.0 00:04:43.871 SYMLINK libspdk_trace_parser.so 00:04:43.871 CC lib/json/json_util.o 00:04:43.871 CC lib/json/json_write.o 00:04:43.871 LIB libspdk_conf.a 00:04:43.871 CC lib/env_dpdk/pci.o 00:04:43.871 CC lib/env_dpdk/init.o 00:04:43.871 SO libspdk_conf.so.5.0 00:04:43.871 CC lib/rdma/rdma_verbs.o 00:04:43.871 SYMLINK libspdk_conf.so 00:04:43.871 CC lib/env_dpdk/threads.o 00:04:43.871 CC lib/env_dpdk/pci_ioat.o 00:04:44.129 LIB libspdk_json.a 00:04:44.129 CC lib/env_dpdk/pci_virtio.o 00:04:44.129 SO libspdk_json.so.5.1 00:04:44.129 CC lib/env_dpdk/pci_vmd.o 00:04:44.129 LIB libspdk_rdma.a 00:04:44.129 LIB libspdk_idxd.a 00:04:44.129 SO libspdk_rdma.so.5.0 00:04:44.129 SYMLINK libspdk_json.so 00:04:44.129 SO libspdk_idxd.so.11.0 00:04:44.129 CC lib/env_dpdk/pci_idxd.o 00:04:44.129 CC lib/env_dpdk/pci_event.o 00:04:44.129 CC lib/vmd/led.o 00:04:44.129 SYMLINK libspdk_rdma.so 00:04:44.129 SYMLINK libspdk_idxd.so 00:04:44.129 CC lib/env_dpdk/sigbus_handler.o 00:04:44.129 CC lib/env_dpdk/pci_dpdk.o 00:04:44.129 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:44.129 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:44.390 CC lib/jsonrpc/jsonrpc_server.o 00:04:44.390 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:44.390 LIB libspdk_vmd.a 00:04:44.390 CC lib/jsonrpc/jsonrpc_client.o 00:04:44.390 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:44.390 SO 
libspdk_vmd.so.5.0 00:04:44.390 SYMLINK libspdk_vmd.so 00:04:44.647 LIB libspdk_jsonrpc.a 00:04:44.647 SO libspdk_jsonrpc.so.5.1 00:04:44.647 SYMLINK libspdk_jsonrpc.so 00:04:44.908 CC lib/rpc/rpc.o 00:04:44.908 LIB libspdk_env_dpdk.a 00:04:45.166 SO libspdk_env_dpdk.so.13.0 00:04:45.166 LIB libspdk_rpc.a 00:04:45.166 SO libspdk_rpc.so.5.0 00:04:45.166 SYMLINK libspdk_rpc.so 00:04:45.166 SYMLINK libspdk_env_dpdk.so 00:04:45.166 CC lib/trace/trace.o 00:04:45.166 CC lib/trace/trace_flags.o 00:04:45.166 CC lib/notify/notify_rpc.o 00:04:45.166 CC lib/notify/notify.o 00:04:45.166 CC lib/sock/sock_rpc.o 00:04:45.166 CC lib/trace/trace_rpc.o 00:04:45.166 CC lib/sock/sock.o 00:04:45.425 LIB libspdk_notify.a 00:04:45.425 SO libspdk_notify.so.5.0 00:04:45.425 LIB libspdk_trace.a 00:04:45.425 SO libspdk_trace.so.9.0 00:04:45.682 SYMLINK libspdk_notify.so 00:04:45.682 SYMLINK libspdk_trace.so 00:04:45.682 LIB libspdk_sock.a 00:04:45.682 SO libspdk_sock.so.8.0 00:04:45.682 SYMLINK libspdk_sock.so 00:04:45.682 CC lib/thread/thread.o 00:04:45.682 CC lib/thread/iobuf.o 00:04:45.951 CC lib/nvme/nvme_ctrlr.o 00:04:45.951 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:45.951 CC lib/nvme/nvme_ns_cmd.o 00:04:45.951 CC lib/nvme/nvme_fabric.o 00:04:45.951 CC lib/nvme/nvme_pcie.o 00:04:45.951 CC lib/nvme/nvme_qpair.o 00:04:45.951 CC lib/nvme/nvme_ns.o 00:04:45.951 CC lib/nvme/nvme_pcie_common.o 00:04:46.251 CC lib/nvme/nvme.o 00:04:46.523 CC lib/nvme/nvme_quirks.o 00:04:46.780 CC lib/nvme/nvme_transport.o 00:04:46.780 CC lib/nvme/nvme_discovery.o 00:04:46.780 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:46.780 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:46.780 CC lib/nvme/nvme_tcp.o 00:04:47.056 CC lib/nvme/nvme_opal.o 00:04:47.056 CC lib/nvme/nvme_io_msg.o 00:04:47.315 CC lib/nvme/nvme_poll_group.o 00:04:47.315 CC lib/nvme/nvme_zns.o 00:04:47.315 CC lib/nvme/nvme_cuse.o 00:04:47.315 LIB libspdk_thread.a 00:04:47.315 SO libspdk_thread.so.9.0 00:04:47.572 CC lib/nvme/nvme_vfio_user.o 00:04:47.572 SYMLINK libspdk_thread.so 00:04:47.572 CC lib/nvme/nvme_rdma.o 00:04:47.572 CC lib/accel/accel.o 00:04:47.572 CC lib/blob/blobstore.o 00:04:47.831 CC lib/blob/request.o 00:04:47.831 CC lib/blob/zeroes.o 00:04:48.088 CC lib/accel/accel_rpc.o 00:04:48.088 CC lib/blob/blob_bs_dev.o 00:04:48.088 CC lib/accel/accel_sw.o 00:04:48.088 CC lib/init/json_config.o 00:04:48.088 CC lib/init/subsystem.o 00:04:48.347 CC lib/virtio/virtio.o 00:04:48.347 CC lib/virtio/virtio_vhost_user.o 00:04:48.347 CC lib/virtio/virtio_vfio_user.o 00:04:48.347 CC lib/init/subsystem_rpc.o 00:04:48.347 CC lib/init/rpc.o 00:04:48.347 CC lib/virtio/virtio_pci.o 00:04:48.604 LIB libspdk_init.a 00:04:48.604 SO libspdk_init.so.4.0 00:04:48.604 LIB libspdk_accel.a 00:04:48.604 SYMLINK libspdk_init.so 00:04:48.604 SO libspdk_accel.so.14.0 00:04:48.604 LIB libspdk_virtio.a 00:04:48.604 SO libspdk_virtio.so.6.0 00:04:48.604 SYMLINK libspdk_accel.so 00:04:48.870 CC lib/event/reactor.o 00:04:48.870 CC lib/event/app.o 00:04:48.870 CC lib/event/scheduler_static.o 00:04:48.870 CC lib/event/log_rpc.o 00:04:48.870 CC lib/event/app_rpc.o 00:04:48.870 SYMLINK libspdk_virtio.so 00:04:48.870 LIB libspdk_nvme.a 00:04:48.870 CC lib/bdev/bdev.o 00:04:48.870 CC lib/bdev/bdev_rpc.o 00:04:48.870 CC lib/bdev/bdev_zone.o 00:04:48.870 CC lib/bdev/part.o 00:04:48.870 CC lib/bdev/scsi_nvme.o 00:04:49.128 SO libspdk_nvme.so.12.0 00:04:49.128 LIB libspdk_event.a 00:04:49.128 SO libspdk_event.so.12.0 00:04:49.388 SYMLINK libspdk_nvme.so 00:04:49.388 SYMLINK libspdk_event.so 00:04:50.326 LIB libspdk_blob.a 
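
Each component in the make output above goes through the same four visible steps: CC (compile an object), LIB (archive into a static library), SO (link the versioned shared object) and SYMLINK (create the unversioned .so link). The snippet below illustrates the equivalent standalone commands for a made-up library "libexample"; it is not SPDK's actual build system, and the source file is generated on the spot purely so the example is self-contained.

#!/usr/bin/env bash
# Illustrative only: the generic compile/archive/link/symlink sequence behind the
# CC / LIB / SO / SYMLINK lines, for a hypothetical libexample.
set -euo pipefail

printf 'int example(void) { return 42; }\n' > example.c

cc -c -fPIC -o example.o example.c                     # CC      example.o
ar crs libexample.a example.o                          # LIB     libexample.a
cc -shared -Wl,-soname,libexample.so.5 \
   -o libexample.so.5.0 example.o                      # SO      libexample.so.5.0
ln -sf libexample.so.5.0 libexample.so                 # SYMLINK libexample.so
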
00:04:50.326 SO libspdk_blob.so.10.1 00:04:50.601 SYMLINK libspdk_blob.so 00:04:50.601 CC lib/blobfs/blobfs.o 00:04:50.601 CC lib/blobfs/tree.o 00:04:50.601 CC lib/lvol/lvol.o 00:04:51.537 LIB libspdk_bdev.a 00:04:51.537 SO libspdk_bdev.so.14.0 00:04:51.537 SYMLINK libspdk_bdev.so 00:04:51.537 LIB libspdk_blobfs.a 00:04:51.537 LIB libspdk_lvol.a 00:04:51.537 SO libspdk_blobfs.so.9.0 00:04:51.537 SO libspdk_lvol.so.9.1 00:04:51.537 CC lib/nbd/nbd.o 00:04:51.537 CC lib/nbd/nbd_rpc.o 00:04:51.537 CC lib/scsi/lun.o 00:04:51.537 CC lib/scsi/dev.o 00:04:51.537 CC lib/scsi/port.o 00:04:51.537 CC lib/ftl/ftl_core.o 00:04:51.537 CC lib/nvmf/ctrlr.o 00:04:51.537 CC lib/ublk/ublk.o 00:04:51.537 SYMLINK libspdk_lvol.so 00:04:51.537 SYMLINK libspdk_blobfs.so 00:04:51.537 CC lib/ublk/ublk_rpc.o 00:04:51.537 CC lib/scsi/scsi.o 00:04:51.794 CC lib/scsi/scsi_bdev.o 00:04:51.794 CC lib/nvmf/ctrlr_discovery.o 00:04:51.794 CC lib/scsi/scsi_pr.o 00:04:51.794 CC lib/ftl/ftl_init.o 00:04:51.794 CC lib/scsi/scsi_rpc.o 00:04:51.794 CC lib/scsi/task.o 00:04:52.053 CC lib/ftl/ftl_layout.o 00:04:52.053 CC lib/nvmf/ctrlr_bdev.o 00:04:52.053 LIB libspdk_nbd.a 00:04:52.053 CC lib/ftl/ftl_debug.o 00:04:52.053 SO libspdk_nbd.so.6.0 00:04:52.053 SYMLINK libspdk_nbd.so 00:04:52.053 CC lib/nvmf/subsystem.o 00:04:52.053 CC lib/nvmf/nvmf.o 00:04:52.053 CC lib/nvmf/nvmf_rpc.o 00:04:52.310 LIB libspdk_ublk.a 00:04:52.310 LIB libspdk_scsi.a 00:04:52.310 CC lib/nvmf/transport.o 00:04:52.310 CC lib/nvmf/tcp.o 00:04:52.310 SO libspdk_ublk.so.2.0 00:04:52.310 SO libspdk_scsi.so.8.0 00:04:52.310 CC lib/ftl/ftl_io.o 00:04:52.310 SYMLINK libspdk_ublk.so 00:04:52.310 CC lib/ftl/ftl_sb.o 00:04:52.310 SYMLINK libspdk_scsi.so 00:04:52.310 CC lib/nvmf/rdma.o 00:04:52.567 CC lib/ftl/ftl_l2p.o 00:04:52.567 CC lib/ftl/ftl_l2p_flat.o 00:04:52.567 CC lib/ftl/ftl_nv_cache.o 00:04:52.824 CC lib/ftl/ftl_band.o 00:04:52.824 CC lib/ftl/ftl_band_ops.o 00:04:52.824 CC lib/ftl/ftl_writer.o 00:04:52.824 CC lib/ftl/ftl_rq.o 00:04:53.081 CC lib/ftl/ftl_reloc.o 00:04:53.081 CC lib/iscsi/conn.o 00:04:53.081 CC lib/ftl/ftl_l2p_cache.o 00:04:53.081 CC lib/ftl/ftl_p2l.o 00:04:53.360 CC lib/ftl/mngt/ftl_mngt.o 00:04:53.360 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:53.360 CC lib/vhost/vhost.o 00:04:53.360 CC lib/vhost/vhost_rpc.o 00:04:53.360 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:53.618 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:53.618 CC lib/vhost/vhost_scsi.o 00:04:53.618 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:53.618 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:53.618 CC lib/vhost/vhost_blk.o 00:04:53.618 CC lib/vhost/rte_vhost_user.o 00:04:53.877 CC lib/iscsi/init_grp.o 00:04:53.877 CC lib/iscsi/iscsi.o 00:04:53.877 CC lib/iscsi/md5.o 00:04:53.877 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:54.136 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:54.136 CC lib/iscsi/param.o 00:04:54.136 CC lib/iscsi/portal_grp.o 00:04:54.136 CC lib/iscsi/tgt_node.o 00:04:54.136 CC lib/iscsi/iscsi_subsystem.o 00:04:54.136 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:54.393 CC lib/iscsi/iscsi_rpc.o 00:04:54.393 CC lib/iscsi/task.o 00:04:54.393 LIB libspdk_nvmf.a 00:04:54.393 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:54.651 SO libspdk_nvmf.so.17.0 00:04:54.651 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:54.652 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:54.652 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:54.652 CC lib/ftl/utils/ftl_conf.o 00:04:54.652 CC lib/ftl/utils/ftl_md.o 00:04:54.652 SYMLINK libspdk_nvmf.so 00:04:54.652 CC lib/ftl/utils/ftl_mempool.o 00:04:54.652 CC lib/ftl/utils/ftl_bitmap.o 00:04:54.652 CC 
lib/ftl/utils/ftl_property.o 00:04:54.652 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:54.652 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:54.652 LIB libspdk_vhost.a 00:04:54.910 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:54.910 SO libspdk_vhost.so.7.1 00:04:54.910 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:54.910 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:54.910 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:54.910 SYMLINK libspdk_vhost.so 00:04:54.910 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:54.910 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:54.910 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:54.910 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:54.910 CC lib/ftl/base/ftl_base_dev.o 00:04:55.167 CC lib/ftl/base/ftl_base_bdev.o 00:04:55.167 CC lib/ftl/ftl_trace.o 00:04:55.167 LIB libspdk_iscsi.a 00:04:55.426 SO libspdk_iscsi.so.7.0 00:04:55.426 LIB libspdk_ftl.a 00:04:55.426 SYMLINK libspdk_iscsi.so 00:04:55.687 SO libspdk_ftl.so.8.0 00:04:55.687 SYMLINK libspdk_ftl.so 00:04:55.945 CC module/env_dpdk/env_dpdk_rpc.o 00:04:55.945 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:55.945 CC module/blob/bdev/blob_bdev.o 00:04:55.945 CC module/accel/error/accel_error.o 00:04:55.945 CC module/accel/dsa/accel_dsa.o 00:04:55.945 CC module/scheduler/gscheduler/gscheduler.o 00:04:55.945 CC module/accel/ioat/accel_ioat.o 00:04:55.945 CC module/accel/iaa/accel_iaa.o 00:04:55.945 CC module/sock/posix/posix.o 00:04:55.945 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:56.203 LIB libspdk_env_dpdk_rpc.a 00:04:56.203 SO libspdk_env_dpdk_rpc.so.5.0 00:04:56.203 LIB libspdk_scheduler_gscheduler.a 00:04:56.203 LIB libspdk_scheduler_dpdk_governor.a 00:04:56.203 SO libspdk_scheduler_gscheduler.so.3.0 00:04:56.203 SYMLINK libspdk_env_dpdk_rpc.so 00:04:56.203 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:56.203 CC module/accel/ioat/accel_ioat_rpc.o 00:04:56.203 CC module/accel/error/accel_error_rpc.o 00:04:56.203 LIB libspdk_scheduler_dynamic.a 00:04:56.203 CC module/accel/iaa/accel_iaa_rpc.o 00:04:56.203 SYMLINK libspdk_scheduler_gscheduler.so 00:04:56.203 CC module/accel/dsa/accel_dsa_rpc.o 00:04:56.203 SO libspdk_scheduler_dynamic.so.3.0 00:04:56.203 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:56.203 LIB libspdk_blob_bdev.a 00:04:56.461 SYMLINK libspdk_scheduler_dynamic.so 00:04:56.461 SO libspdk_blob_bdev.so.10.1 00:04:56.461 CC module/sock/uring/uring.o 00:04:56.461 LIB libspdk_accel_ioat.a 00:04:56.461 SYMLINK libspdk_blob_bdev.so 00:04:56.461 LIB libspdk_accel_error.a 00:04:56.461 LIB libspdk_accel_iaa.a 00:04:56.461 SO libspdk_accel_ioat.so.5.0 00:04:56.461 LIB libspdk_accel_dsa.a 00:04:56.461 SO libspdk_accel_error.so.1.0 00:04:56.461 SO libspdk_accel_iaa.so.2.0 00:04:56.461 SO libspdk_accel_dsa.so.4.0 00:04:56.461 SYMLINK libspdk_accel_ioat.so 00:04:56.461 SYMLINK libspdk_accel_error.so 00:04:56.461 SYMLINK libspdk_accel_iaa.so 00:04:56.461 SYMLINK libspdk_accel_dsa.so 00:04:56.461 CC module/bdev/error/vbdev_error.o 00:04:56.461 CC module/blobfs/bdev/blobfs_bdev.o 00:04:56.461 CC module/bdev/delay/vbdev_delay.o 00:04:56.461 CC module/bdev/gpt/gpt.o 00:04:56.719 CC module/bdev/malloc/bdev_malloc.o 00:04:56.719 CC module/bdev/lvol/vbdev_lvol.o 00:04:56.719 CC module/bdev/nvme/bdev_nvme.o 00:04:56.719 CC module/bdev/null/bdev_null.o 00:04:56.719 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:56.719 LIB libspdk_sock_posix.a 00:04:56.719 CC module/bdev/gpt/vbdev_gpt.o 00:04:56.719 SO libspdk_sock_posix.so.5.0 00:04:56.719 CC module/bdev/error/vbdev_error_rpc.o 00:04:56.978 SYMLINK libspdk_sock_posix.so 
00:04:56.978 CC module/bdev/null/bdev_null_rpc.o 00:04:56.978 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:56.978 LIB libspdk_blobfs_bdev.a 00:04:56.978 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:56.978 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:56.978 SO libspdk_blobfs_bdev.so.5.0 00:04:56.978 LIB libspdk_bdev_error.a 00:04:56.978 SYMLINK libspdk_blobfs_bdev.so 00:04:56.978 SO libspdk_bdev_error.so.5.0 00:04:56.978 LIB libspdk_bdev_null.a 00:04:56.978 LIB libspdk_bdev_gpt.a 00:04:56.978 LIB libspdk_sock_uring.a 00:04:56.978 SYMLINK libspdk_bdev_error.so 00:04:56.978 SO libspdk_bdev_null.so.5.0 00:04:57.236 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:57.236 SO libspdk_bdev_gpt.so.5.0 00:04:57.236 LIB libspdk_bdev_delay.a 00:04:57.236 SO libspdk_sock_uring.so.4.0 00:04:57.236 LIB libspdk_bdev_malloc.a 00:04:57.236 CC module/bdev/passthru/vbdev_passthru.o 00:04:57.236 SO libspdk_bdev_malloc.so.5.0 00:04:57.236 SO libspdk_bdev_delay.so.5.0 00:04:57.236 SYMLINK libspdk_bdev_null.so 00:04:57.236 SYMLINK libspdk_sock_uring.so 00:04:57.236 SYMLINK libspdk_bdev_gpt.so 00:04:57.236 CC module/bdev/nvme/nvme_rpc.o 00:04:57.236 CC module/bdev/nvme/bdev_mdns_client.o 00:04:57.236 SYMLINK libspdk_bdev_delay.so 00:04:57.236 SYMLINK libspdk_bdev_malloc.so 00:04:57.236 LIB libspdk_bdev_lvol.a 00:04:57.236 CC module/bdev/nvme/vbdev_opal.o 00:04:57.236 CC module/bdev/raid/bdev_raid.o 00:04:57.236 SO libspdk_bdev_lvol.so.5.0 00:04:57.236 CC module/bdev/split/vbdev_split.o 00:04:57.236 SYMLINK libspdk_bdev_lvol.so 00:04:57.495 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:57.495 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:57.495 CC module/bdev/uring/bdev_uring.o 00:04:57.495 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:57.495 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:57.495 CC module/bdev/raid/bdev_raid_rpc.o 00:04:57.495 CC module/bdev/aio/bdev_aio.o 00:04:57.495 CC module/bdev/split/vbdev_split_rpc.o 00:04:57.754 LIB libspdk_bdev_passthru.a 00:04:57.754 SO libspdk_bdev_passthru.so.5.0 00:04:57.754 LIB libspdk_bdev_zone_block.a 00:04:57.754 CC module/bdev/aio/bdev_aio_rpc.o 00:04:57.754 SYMLINK libspdk_bdev_passthru.so 00:04:57.754 CC module/bdev/uring/bdev_uring_rpc.o 00:04:57.754 LIB libspdk_bdev_split.a 00:04:57.754 CC module/bdev/raid/bdev_raid_sb.o 00:04:57.754 SO libspdk_bdev_zone_block.so.5.0 00:04:57.754 SO libspdk_bdev_split.so.5.0 00:04:57.754 SYMLINK libspdk_bdev_zone_block.so 00:04:57.754 SYMLINK libspdk_bdev_split.so 00:04:57.754 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:57.754 CC module/bdev/raid/raid0.o 00:04:57.754 CC module/bdev/raid/raid1.o 00:04:57.754 CC module/bdev/ftl/bdev_ftl.o 00:04:58.012 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:58.012 LIB libspdk_bdev_aio.a 00:04:58.012 LIB libspdk_bdev_uring.a 00:04:58.012 SO libspdk_bdev_aio.so.5.0 00:04:58.012 SO libspdk_bdev_uring.so.5.0 00:04:58.012 SYMLINK libspdk_bdev_aio.so 00:04:58.012 SYMLINK libspdk_bdev_uring.so 00:04:58.012 CC module/bdev/raid/concat.o 00:04:58.012 CC module/bdev/iscsi/bdev_iscsi.o 00:04:58.012 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:58.012 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:58.012 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:58.012 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:58.270 LIB libspdk_bdev_ftl.a 00:04:58.270 SO libspdk_bdev_ftl.so.5.0 00:04:58.270 SYMLINK libspdk_bdev_ftl.so 00:04:58.270 LIB libspdk_bdev_raid.a 00:04:58.270 SO libspdk_bdev_raid.so.5.0 00:04:58.528 SYMLINK libspdk_bdev_raid.so 00:04:58.528 LIB libspdk_bdev_iscsi.a 00:04:58.528 SO 
libspdk_bdev_iscsi.so.5.0 00:04:58.528 SYMLINK libspdk_bdev_iscsi.so 00:04:58.787 LIB libspdk_bdev_virtio.a 00:04:58.787 SO libspdk_bdev_virtio.so.5.0 00:04:58.787 SYMLINK libspdk_bdev_virtio.so 00:04:58.787 LIB libspdk_bdev_nvme.a 00:04:59.046 SO libspdk_bdev_nvme.so.6.0 00:04:59.046 SYMLINK libspdk_bdev_nvme.so 00:04:59.328 CC module/event/subsystems/sock/sock.o 00:04:59.328 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:59.328 CC module/event/subsystems/vmd/vmd.o 00:04:59.328 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:59.328 CC module/event/subsystems/iobuf/iobuf.o 00:04:59.328 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:59.328 CC module/event/subsystems/scheduler/scheduler.o 00:04:59.328 LIB libspdk_event_sock.a 00:04:59.586 LIB libspdk_event_vhost_blk.a 00:04:59.586 LIB libspdk_event_iobuf.a 00:04:59.586 LIB libspdk_event_scheduler.a 00:04:59.586 SO libspdk_event_sock.so.4.0 00:04:59.586 LIB libspdk_event_vmd.a 00:04:59.586 SO libspdk_event_vhost_blk.so.2.0 00:04:59.586 SO libspdk_event_scheduler.so.3.0 00:04:59.586 SO libspdk_event_iobuf.so.2.0 00:04:59.586 SO libspdk_event_vmd.so.5.0 00:04:59.586 SYMLINK libspdk_event_vhost_blk.so 00:04:59.586 SYMLINK libspdk_event_sock.so 00:04:59.586 SYMLINK libspdk_event_scheduler.so 00:04:59.586 SYMLINK libspdk_event_vmd.so 00:04:59.586 SYMLINK libspdk_event_iobuf.so 00:04:59.845 CC module/event/subsystems/accel/accel.o 00:04:59.845 LIB libspdk_event_accel.a 00:04:59.845 SO libspdk_event_accel.so.5.0 00:05:00.104 SYMLINK libspdk_event_accel.so 00:05:00.104 CC module/event/subsystems/bdev/bdev.o 00:05:00.361 LIB libspdk_event_bdev.a 00:05:00.361 SO libspdk_event_bdev.so.5.0 00:05:00.619 SYMLINK libspdk_event_bdev.so 00:05:00.619 CC module/event/subsystems/scsi/scsi.o 00:05:00.619 CC module/event/subsystems/ublk/ublk.o 00:05:00.619 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:00.619 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:00.619 CC module/event/subsystems/nbd/nbd.o 00:05:00.876 LIB libspdk_event_ublk.a 00:05:00.876 LIB libspdk_event_nbd.a 00:05:00.876 LIB libspdk_event_scsi.a 00:05:00.876 SO libspdk_event_ublk.so.2.0 00:05:00.876 SO libspdk_event_nbd.so.5.0 00:05:00.876 SO libspdk_event_scsi.so.5.0 00:05:00.876 SYMLINK libspdk_event_ublk.so 00:05:00.876 SYMLINK libspdk_event_nbd.so 00:05:00.876 SYMLINK libspdk_event_scsi.so 00:05:00.876 LIB libspdk_event_nvmf.a 00:05:00.876 SO libspdk_event_nvmf.so.5.0 00:05:01.134 SYMLINK libspdk_event_nvmf.so 00:05:01.134 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:01.134 CC module/event/subsystems/iscsi/iscsi.o 00:05:01.134 LIB libspdk_event_iscsi.a 00:05:01.134 LIB libspdk_event_vhost_scsi.a 00:05:01.134 SO libspdk_event_iscsi.so.5.0 00:05:01.134 SO libspdk_event_vhost_scsi.so.2.0 00:05:01.392 SYMLINK libspdk_event_iscsi.so 00:05:01.392 SYMLINK libspdk_event_vhost_scsi.so 00:05:01.392 SO libspdk.so.5.0 00:05:01.392 SYMLINK libspdk.so 00:05:01.650 CXX app/trace/trace.o 00:05:01.650 CC app/trace_record/trace_record.o 00:05:01.650 CC app/spdk_nvme_perf/perf.o 00:05:01.650 CC app/spdk_lspci/spdk_lspci.o 00:05:01.650 CC app/iscsi_tgt/iscsi_tgt.o 00:05:01.650 CC app/nvmf_tgt/nvmf_main.o 00:05:01.650 CC app/spdk_tgt/spdk_tgt.o 00:05:01.650 CC examples/accel/perf/accel_perf.o 00:05:01.650 CC test/accel/dif/dif.o 00:05:01.650 CC test/app/bdev_svc/bdev_svc.o 00:05:01.650 LINK spdk_lspci 00:05:01.909 LINK nvmf_tgt 00:05:01.909 LINK spdk_trace_record 00:05:01.909 LINK iscsi_tgt 00:05:01.909 LINK spdk_tgt 00:05:01.909 LINK bdev_svc 00:05:01.909 CC 
app/spdk_nvme_identify/identify.o 00:05:02.167 LINK spdk_trace 00:05:02.167 LINK dif 00:05:02.167 CC app/spdk_nvme_discover/discovery_aer.o 00:05:02.167 CC app/spdk_top/spdk_top.o 00:05:02.167 CC test/bdev/bdevio/bdevio.o 00:05:02.167 LINK accel_perf 00:05:02.167 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:02.167 CC app/vhost/vhost.o 00:05:02.167 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:02.425 LINK spdk_nvme_discover 00:05:02.425 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:02.425 LINK spdk_nvme_perf 00:05:02.425 LINK vhost 00:05:02.425 CC examples/bdev/hello_world/hello_bdev.o 00:05:02.425 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:02.683 LINK bdevio 00:05:02.683 CC examples/bdev/bdevperf/bdevperf.o 00:05:02.683 LINK nvme_fuzz 00:05:02.683 CC app/spdk_dd/spdk_dd.o 00:05:02.683 LINK hello_bdev 00:05:02.683 CC test/blobfs/mkfs/mkfs.o 00:05:02.683 LINK spdk_nvme_identify 00:05:02.942 CC examples/blob/hello_world/hello_blob.o 00:05:02.942 CC examples/ioat/perf/perf.o 00:05:02.942 LINK vhost_fuzz 00:05:02.942 CC examples/ioat/verify/verify.o 00:05:02.942 LINK mkfs 00:05:02.942 LINK spdk_top 00:05:02.942 CC examples/blob/cli/blobcli.o 00:05:03.200 LINK spdk_dd 00:05:03.201 LINK ioat_perf 00:05:03.201 TEST_HEADER include/spdk/accel.h 00:05:03.201 TEST_HEADER include/spdk/accel_module.h 00:05:03.201 TEST_HEADER include/spdk/assert.h 00:05:03.201 TEST_HEADER include/spdk/barrier.h 00:05:03.201 TEST_HEADER include/spdk/base64.h 00:05:03.201 LINK hello_blob 00:05:03.201 TEST_HEADER include/spdk/bdev.h 00:05:03.201 TEST_HEADER include/spdk/bdev_module.h 00:05:03.201 TEST_HEADER include/spdk/bdev_zone.h 00:05:03.201 TEST_HEADER include/spdk/bit_array.h 00:05:03.201 TEST_HEADER include/spdk/bit_pool.h 00:05:03.201 TEST_HEADER include/spdk/blob_bdev.h 00:05:03.201 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:03.201 TEST_HEADER include/spdk/blobfs.h 00:05:03.201 TEST_HEADER include/spdk/blob.h 00:05:03.201 TEST_HEADER include/spdk/conf.h 00:05:03.201 TEST_HEADER include/spdk/config.h 00:05:03.201 TEST_HEADER include/spdk/cpuset.h 00:05:03.201 TEST_HEADER include/spdk/crc16.h 00:05:03.201 TEST_HEADER include/spdk/crc32.h 00:05:03.201 TEST_HEADER include/spdk/crc64.h 00:05:03.201 TEST_HEADER include/spdk/dif.h 00:05:03.201 TEST_HEADER include/spdk/dma.h 00:05:03.201 TEST_HEADER include/spdk/endian.h 00:05:03.201 TEST_HEADER include/spdk/env_dpdk.h 00:05:03.201 TEST_HEADER include/spdk/env.h 00:05:03.201 TEST_HEADER include/spdk/event.h 00:05:03.201 TEST_HEADER include/spdk/fd_group.h 00:05:03.201 TEST_HEADER include/spdk/fd.h 00:05:03.201 LINK verify 00:05:03.201 TEST_HEADER include/spdk/file.h 00:05:03.201 TEST_HEADER include/spdk/ftl.h 00:05:03.201 TEST_HEADER include/spdk/gpt_spec.h 00:05:03.201 TEST_HEADER include/spdk/hexlify.h 00:05:03.201 TEST_HEADER include/spdk/histogram_data.h 00:05:03.201 TEST_HEADER include/spdk/idxd.h 00:05:03.201 TEST_HEADER include/spdk/idxd_spec.h 00:05:03.201 TEST_HEADER include/spdk/init.h 00:05:03.201 TEST_HEADER include/spdk/ioat.h 00:05:03.201 TEST_HEADER include/spdk/ioat_spec.h 00:05:03.201 TEST_HEADER include/spdk/iscsi_spec.h 00:05:03.201 TEST_HEADER include/spdk/json.h 00:05:03.201 TEST_HEADER include/spdk/jsonrpc.h 00:05:03.201 TEST_HEADER include/spdk/likely.h 00:05:03.201 TEST_HEADER include/spdk/log.h 00:05:03.201 TEST_HEADER include/spdk/lvol.h 00:05:03.201 TEST_HEADER include/spdk/memory.h 00:05:03.201 TEST_HEADER include/spdk/mmio.h 00:05:03.201 TEST_HEADER include/spdk/nbd.h 00:05:03.201 TEST_HEADER include/spdk/notify.h 00:05:03.201 
TEST_HEADER include/spdk/nvme.h 00:05:03.201 TEST_HEADER include/spdk/nvme_intel.h 00:05:03.201 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:03.201 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:03.201 CC examples/nvme/hello_world/hello_world.o 00:05:03.201 TEST_HEADER include/spdk/nvme_spec.h 00:05:03.201 TEST_HEADER include/spdk/nvme_zns.h 00:05:03.201 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:03.201 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:03.201 TEST_HEADER include/spdk/nvmf.h 00:05:03.201 CC examples/sock/hello_world/hello_sock.o 00:05:03.201 TEST_HEADER include/spdk/nvmf_spec.h 00:05:03.201 TEST_HEADER include/spdk/nvmf_transport.h 00:05:03.201 TEST_HEADER include/spdk/opal_spec.h 00:05:03.201 TEST_HEADER include/spdk/opal.h 00:05:03.201 TEST_HEADER include/spdk/pci_ids.h 00:05:03.201 TEST_HEADER include/spdk/pipe.h 00:05:03.201 TEST_HEADER include/spdk/queue.h 00:05:03.201 TEST_HEADER include/spdk/reduce.h 00:05:03.201 TEST_HEADER include/spdk/rpc.h 00:05:03.201 TEST_HEADER include/spdk/scheduler.h 00:05:03.201 TEST_HEADER include/spdk/scsi.h 00:05:03.201 TEST_HEADER include/spdk/scsi_spec.h 00:05:03.201 TEST_HEADER include/spdk/sock.h 00:05:03.201 TEST_HEADER include/spdk/stdinc.h 00:05:03.201 TEST_HEADER include/spdk/string.h 00:05:03.201 TEST_HEADER include/spdk/thread.h 00:05:03.201 TEST_HEADER include/spdk/trace.h 00:05:03.201 TEST_HEADER include/spdk/trace_parser.h 00:05:03.461 TEST_HEADER include/spdk/tree.h 00:05:03.461 TEST_HEADER include/spdk/ublk.h 00:05:03.461 TEST_HEADER include/spdk/util.h 00:05:03.461 TEST_HEADER include/spdk/uuid.h 00:05:03.461 TEST_HEADER include/spdk/version.h 00:05:03.461 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:03.461 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:03.461 TEST_HEADER include/spdk/vhost.h 00:05:03.461 TEST_HEADER include/spdk/vmd.h 00:05:03.461 TEST_HEADER include/spdk/xor.h 00:05:03.461 TEST_HEADER include/spdk/zipf.h 00:05:03.461 CXX test/cpp_headers/accel.o 00:05:03.461 CXX test/cpp_headers/accel_module.o 00:05:03.461 CC test/app/histogram_perf/histogram_perf.o 00:05:03.461 LINK bdevperf 00:05:03.461 CC test/app/jsoncat/jsoncat.o 00:05:03.461 CC app/fio/nvme/fio_plugin.o 00:05:03.461 LINK hello_world 00:05:03.461 LINK histogram_perf 00:05:03.461 CXX test/cpp_headers/assert.o 00:05:03.461 LINK blobcli 00:05:03.461 LINK hello_sock 00:05:03.719 LINK jsoncat 00:05:03.719 CC app/fio/bdev/fio_plugin.o 00:05:03.719 CC test/app/stub/stub.o 00:05:03.719 CXX test/cpp_headers/barrier.o 00:05:03.719 CC examples/nvme/reconnect/reconnect.o 00:05:03.719 CXX test/cpp_headers/base64.o 00:05:03.719 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:03.719 CXX test/cpp_headers/bdev.o 00:05:03.719 LINK stub 00:05:03.719 CC examples/vmd/lsvmd/lsvmd.o 00:05:03.979 CXX test/cpp_headers/bdev_module.o 00:05:03.979 LINK iscsi_fuzz 00:05:03.979 CC examples/vmd/led/led.o 00:05:03.979 CXX test/cpp_headers/bdev_zone.o 00:05:03.979 CXX test/cpp_headers/bit_array.o 00:05:03.979 LINK lsvmd 00:05:03.979 LINK spdk_nvme 00:05:03.979 CXX test/cpp_headers/bit_pool.o 00:05:03.979 LINK led 00:05:03.979 LINK reconnect 00:05:04.291 LINK spdk_bdev 00:05:04.291 CXX test/cpp_headers/blob_bdev.o 00:05:04.291 LINK nvme_manage 00:05:04.291 CC test/dma/test_dma/test_dma.o 00:05:04.291 CC test/event/event_perf/event_perf.o 00:05:04.291 CC test/event/reactor/reactor.o 00:05:04.291 CC test/env/vtophys/vtophys.o 00:05:04.291 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:04.291 CC test/event/reactor_perf/reactor_perf.o 00:05:04.291 CC 
test/env/mem_callbacks/mem_callbacks.o 00:05:04.291 CC test/lvol/esnap/esnap.o 00:05:04.291 CXX test/cpp_headers/blobfs_bdev.o 00:05:04.548 LINK event_perf 00:05:04.548 LINK reactor 00:05:04.548 CC examples/nvme/arbitration/arbitration.o 00:05:04.548 LINK vtophys 00:05:04.548 LINK reactor_perf 00:05:04.548 LINK env_dpdk_post_init 00:05:04.548 CXX test/cpp_headers/blobfs.o 00:05:04.548 CXX test/cpp_headers/blob.o 00:05:04.548 CC test/env/memory/memory_ut.o 00:05:04.548 LINK test_dma 00:05:04.806 CC test/env/pci/pci_ut.o 00:05:04.806 CC test/event/app_repeat/app_repeat.o 00:05:04.806 CC examples/nvmf/nvmf/nvmf.o 00:05:04.806 CXX test/cpp_headers/conf.o 00:05:04.806 CC examples/nvme/hotplug/hotplug.o 00:05:04.806 LINK arbitration 00:05:04.806 LINK app_repeat 00:05:04.806 CXX test/cpp_headers/config.o 00:05:05.066 LINK mem_callbacks 00:05:05.066 CXX test/cpp_headers/cpuset.o 00:05:05.066 CC test/nvme/aer/aer.o 00:05:05.066 LINK hotplug 00:05:05.066 LINK nvmf 00:05:05.066 CC test/rpc_client/rpc_client_test.o 00:05:05.066 LINK pci_ut 00:05:05.066 CXX test/cpp_headers/crc16.o 00:05:05.066 CC test/event/scheduler/scheduler.o 00:05:05.066 CC test/thread/poller_perf/poller_perf.o 00:05:05.325 LINK rpc_client_test 00:05:05.325 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:05.325 CXX test/cpp_headers/crc32.o 00:05:05.325 LINK aer 00:05:05.325 CC examples/util/zipf/zipf.o 00:05:05.325 LINK poller_perf 00:05:05.325 LINK scheduler 00:05:05.325 LINK cmb_copy 00:05:05.583 CXX test/cpp_headers/crc64.o 00:05:05.583 CC examples/thread/thread/thread_ex.o 00:05:05.583 CC examples/idxd/perf/perf.o 00:05:05.583 LINK zipf 00:05:05.583 CC test/nvme/reset/reset.o 00:05:05.583 CC test/nvme/sgl/sgl.o 00:05:05.583 LINK memory_ut 00:05:05.583 CXX test/cpp_headers/dif.o 00:05:05.583 CC examples/nvme/abort/abort.o 00:05:05.583 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:05.583 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:05.840 CXX test/cpp_headers/dma.o 00:05:05.840 LINK thread 00:05:05.840 LINK reset 00:05:05.840 CXX test/cpp_headers/endian.o 00:05:05.840 LINK sgl 00:05:05.840 LINK idxd_perf 00:05:05.840 LINK interrupt_tgt 00:05:05.840 LINK pmr_persistence 00:05:05.840 CC test/nvme/e2edp/nvme_dp.o 00:05:06.099 CXX test/cpp_headers/env_dpdk.o 00:05:06.099 CXX test/cpp_headers/env.o 00:05:06.099 CXX test/cpp_headers/event.o 00:05:06.099 CC test/nvme/overhead/overhead.o 00:05:06.099 CC test/nvme/err_injection/err_injection.o 00:05:06.099 LINK abort 00:05:06.099 CXX test/cpp_headers/fd_group.o 00:05:06.099 CXX test/cpp_headers/fd.o 00:05:06.099 CXX test/cpp_headers/file.o 00:05:06.099 CXX test/cpp_headers/ftl.o 00:05:06.099 CXX test/cpp_headers/gpt_spec.o 00:05:06.099 CC test/nvme/startup/startup.o 00:05:06.099 CXX test/cpp_headers/hexlify.o 00:05:06.099 LINK err_injection 00:05:06.099 CXX test/cpp_headers/histogram_data.o 00:05:06.358 LINK nvme_dp 00:05:06.358 LINK overhead 00:05:06.358 CXX test/cpp_headers/idxd.o 00:05:06.358 CXX test/cpp_headers/idxd_spec.o 00:05:06.358 CXX test/cpp_headers/init.o 00:05:06.358 LINK startup 00:05:06.358 CXX test/cpp_headers/ioat.o 00:05:06.358 CXX test/cpp_headers/ioat_spec.o 00:05:06.358 CC test/nvme/reserve/reserve.o 00:05:06.358 CC test/nvme/simple_copy/simple_copy.o 00:05:06.358 CC test/nvme/connect_stress/connect_stress.o 00:05:06.616 CXX test/cpp_headers/iscsi_spec.o 00:05:06.616 CC test/nvme/boot_partition/boot_partition.o 00:05:06.616 CC test/nvme/compliance/nvme_compliance.o 00:05:06.616 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:06.616 CC 
test/nvme/fused_ordering/fused_ordering.o 00:05:06.616 LINK reserve 00:05:06.616 CC test/nvme/fdp/fdp.o 00:05:06.616 LINK connect_stress 00:05:06.616 LINK simple_copy 00:05:06.616 CXX test/cpp_headers/json.o 00:05:06.873 LINK boot_partition 00:05:06.873 LINK doorbell_aers 00:05:06.873 LINK fused_ordering 00:05:06.873 CXX test/cpp_headers/jsonrpc.o 00:05:06.873 CC test/nvme/cuse/cuse.o 00:05:06.873 CXX test/cpp_headers/likely.o 00:05:06.873 CXX test/cpp_headers/log.o 00:05:06.873 LINK nvme_compliance 00:05:06.873 CXX test/cpp_headers/lvol.o 00:05:06.873 LINK fdp 00:05:06.873 CXX test/cpp_headers/memory.o 00:05:06.873 CXX test/cpp_headers/mmio.o 00:05:06.873 CXX test/cpp_headers/nbd.o 00:05:07.131 CXX test/cpp_headers/notify.o 00:05:07.131 CXX test/cpp_headers/nvme.o 00:05:07.131 CXX test/cpp_headers/nvme_intel.o 00:05:07.131 CXX test/cpp_headers/nvme_ocssd.o 00:05:07.131 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:07.131 CXX test/cpp_headers/nvme_spec.o 00:05:07.131 CXX test/cpp_headers/nvme_zns.o 00:05:07.131 CXX test/cpp_headers/nvmf_cmd.o 00:05:07.131 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:07.131 CXX test/cpp_headers/nvmf.o 00:05:07.131 CXX test/cpp_headers/nvmf_spec.o 00:05:07.131 CXX test/cpp_headers/nvmf_transport.o 00:05:07.131 CXX test/cpp_headers/opal.o 00:05:07.131 CXX test/cpp_headers/opal_spec.o 00:05:07.388 CXX test/cpp_headers/pci_ids.o 00:05:07.388 CXX test/cpp_headers/pipe.o 00:05:07.388 CXX test/cpp_headers/queue.o 00:05:07.388 CXX test/cpp_headers/reduce.o 00:05:07.388 CXX test/cpp_headers/rpc.o 00:05:07.388 CXX test/cpp_headers/scheduler.o 00:05:07.388 CXX test/cpp_headers/scsi.o 00:05:07.388 CXX test/cpp_headers/scsi_spec.o 00:05:07.388 CXX test/cpp_headers/sock.o 00:05:07.388 CXX test/cpp_headers/stdinc.o 00:05:07.388 CXX test/cpp_headers/string.o 00:05:07.645 CXX test/cpp_headers/thread.o 00:05:07.645 CXX test/cpp_headers/trace.o 00:05:07.645 CXX test/cpp_headers/trace_parser.o 00:05:07.645 CXX test/cpp_headers/tree.o 00:05:07.645 CXX test/cpp_headers/ublk.o 00:05:07.645 CXX test/cpp_headers/util.o 00:05:07.645 CXX test/cpp_headers/uuid.o 00:05:07.645 CXX test/cpp_headers/version.o 00:05:07.645 CXX test/cpp_headers/vfio_user_pci.o 00:05:07.645 CXX test/cpp_headers/vfio_user_spec.o 00:05:07.645 CXX test/cpp_headers/vhost.o 00:05:07.645 CXX test/cpp_headers/vmd.o 00:05:07.903 CXX test/cpp_headers/xor.o 00:05:07.903 CXX test/cpp_headers/zipf.o 00:05:07.903 LINK cuse 00:05:08.841 LINK esnap 00:05:09.100 00:05:09.100 real 0m48.458s 00:05:09.100 user 4m45.250s 00:05:09.100 sys 0m55.221s 00:05:09.100 05:03:58 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:05:09.100 05:03:58 -- common/autotest_common.sh@10 -- $ set +x 00:05:09.100 ************************************ 00:05:09.100 END TEST make 00:05:09.100 ************************************ 00:05:09.100 05:03:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:09.100 05:03:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:09.100 05:03:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:09.359 05:03:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:09.359 05:03:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:09.359 05:03:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:09.359 05:03:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:09.359 05:03:58 -- scripts/common.sh@335 -- # IFS=.-: 00:05:09.359 05:03:58 -- scripts/common.sh@335 -- # read -ra ver1 00:05:09.359 05:03:58 -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.359 05:03:58 -- 
scripts/common.sh@336 -- # read -ra ver2 00:05:09.359 05:03:58 -- scripts/common.sh@337 -- # local 'op=<' 00:05:09.359 05:03:58 -- scripts/common.sh@339 -- # ver1_l=2 00:05:09.359 05:03:58 -- scripts/common.sh@340 -- # ver2_l=1 00:05:09.359 05:03:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:09.359 05:03:58 -- scripts/common.sh@343 -- # case "$op" in 00:05:09.359 05:03:58 -- scripts/common.sh@344 -- # : 1 00:05:09.359 05:03:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:09.359 05:03:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.359 05:03:58 -- scripts/common.sh@364 -- # decimal 1 00:05:09.359 05:03:58 -- scripts/common.sh@352 -- # local d=1 00:05:09.359 05:03:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.359 05:03:58 -- scripts/common.sh@354 -- # echo 1 00:05:09.359 05:03:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:09.359 05:03:58 -- scripts/common.sh@365 -- # decimal 2 00:05:09.359 05:03:58 -- scripts/common.sh@352 -- # local d=2 00:05:09.359 05:03:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.359 05:03:58 -- scripts/common.sh@354 -- # echo 2 00:05:09.359 05:03:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:09.359 05:03:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:09.360 05:03:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:09.360 05:03:58 -- scripts/common.sh@367 -- # return 0 00:05:09.360 05:03:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.360 05:03:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:09.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.360 --rc genhtml_branch_coverage=1 00:05:09.360 --rc genhtml_function_coverage=1 00:05:09.360 --rc genhtml_legend=1 00:05:09.360 --rc geninfo_all_blocks=1 00:05:09.360 --rc geninfo_unexecuted_blocks=1 00:05:09.360 00:05:09.360 ' 00:05:09.360 05:03:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:09.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.360 --rc genhtml_branch_coverage=1 00:05:09.360 --rc genhtml_function_coverage=1 00:05:09.360 --rc genhtml_legend=1 00:05:09.360 --rc geninfo_all_blocks=1 00:05:09.360 --rc geninfo_unexecuted_blocks=1 00:05:09.360 00:05:09.360 ' 00:05:09.360 05:03:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:09.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.360 --rc genhtml_branch_coverage=1 00:05:09.360 --rc genhtml_function_coverage=1 00:05:09.360 --rc genhtml_legend=1 00:05:09.360 --rc geninfo_all_blocks=1 00:05:09.360 --rc geninfo_unexecuted_blocks=1 00:05:09.360 00:05:09.360 ' 00:05:09.360 05:03:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:09.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.360 --rc genhtml_branch_coverage=1 00:05:09.360 --rc genhtml_function_coverage=1 00:05:09.360 --rc genhtml_legend=1 00:05:09.360 --rc geninfo_all_blocks=1 00:05:09.360 --rc geninfo_unexecuted_blocks=1 00:05:09.360 00:05:09.360 ' 00:05:09.360 05:03:58 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:09.360 05:03:58 -- nvmf/common.sh@7 -- # uname -s 00:05:09.360 05:03:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.360 05:03:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.360 05:03:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.360 05:03:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:09.360 
05:03:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.360 05:03:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.360 05:03:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.360 05:03:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.360 05:03:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.360 05:03:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.360 05:03:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:05:09.360 05:03:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:05:09.360 05:03:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.360 05:03:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.360 05:03:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:09.360 05:03:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:09.360 05:03:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.360 05:03:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.360 05:03:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.360 05:03:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.360 05:03:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.360 05:03:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.360 05:03:58 -- paths/export.sh@5 -- # export PATH 00:05:09.360 05:03:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.360 05:03:58 -- nvmf/common.sh@46 -- # : 0 00:05:09.360 05:03:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:09.360 05:03:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:09.360 05:03:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:09.360 05:03:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.360 05:03:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.360 05:03:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:09.360 05:03:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:09.360 05:03:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:09.360 05:03:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:09.360 05:03:58 -- spdk/autotest.sh@32 -- # uname -s 00:05:09.360 05:03:59 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:09.360 05:03:59 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:09.360 05:03:59 -- 
spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:09.360 05:03:59 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:09.360 05:03:59 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:09.360 05:03:59 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:09.360 05:03:59 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:09.360 05:03:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:09.360 05:03:59 -- spdk/autotest.sh@48 -- # udevadm_pid=60090 00:05:09.360 05:03:59 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:05:09.360 05:03:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:09.360 05:03:59 -- spdk/autotest.sh@54 -- # echo 60105 00:05:09.360 05:03:59 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:05:09.360 05:03:59 -- spdk/autotest.sh@56 -- # echo 60108 00:05:09.360 05:03:59 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:05:09.360 05:03:59 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:05:09.360 05:03:59 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:09.360 05:03:59 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:05:09.360 05:03:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:09.360 05:03:59 -- common/autotest_common.sh@10 -- # set +x 00:05:09.360 05:03:59 -- spdk/autotest.sh@70 -- # create_test_list 00:05:09.360 05:03:59 -- common/autotest_common.sh@746 -- # xtrace_disable 00:05:09.360 05:03:59 -- common/autotest_common.sh@10 -- # set +x 00:05:09.360 05:03:59 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:09.360 05:03:59 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:09.360 05:03:59 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:05:09.360 05:03:59 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:09.360 05:03:59 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:05:09.360 05:03:59 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:05:09.360 05:03:59 -- common/autotest_common.sh@1450 -- # uname 00:05:09.360 05:03:59 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:05:09.360 05:03:59 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:05:09.361 05:03:59 -- common/autotest_common.sh@1470 -- # uname 00:05:09.361 05:03:59 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:05:09.361 05:03:59 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:05:09.361 05:03:59 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:09.619 lcov: LCOV version 1.15 00:05:09.619 05:03:59 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:17.771 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions 
found 00:05:17.771 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:17.771 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:17.771 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:17.771 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:17.771 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:39.704 05:04:26 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:05:39.704 05:04:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:39.704 05:04:26 -- common/autotest_common.sh@10 -- # set +x 00:05:39.704 05:04:26 -- spdk/autotest.sh@89 -- # rm -f 00:05:39.704 05:04:26 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:39.704 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:39.704 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:39.704 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:05:39.704 05:04:26 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:05:39.704 05:04:26 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:39.704 05:04:26 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:39.704 05:04:26 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:39.704 05:04:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:39.705 05:04:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:39.705 05:04:26 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:39.705 05:04:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:39.705 05:04:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:39.705 05:04:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:39.705 05:04:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:39.705 05:04:26 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:39.705 05:04:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:39.705 05:04:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:39.705 05:04:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:39.705 05:04:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:39.705 05:04:26 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:39.705 05:04:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:39.705 05:04:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:39.705 05:04:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:39.705 05:04:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:39.705 05:04:26 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:39.705 05:04:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:39.705 05:04:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:39.705 05:04:26 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:05:39.705 05:04:26 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:05:39.705 05:04:26 -- spdk/autotest.sh@108 -- # grep -v p 00:05:39.705 05:04:26 -- 
spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:39.705 05:04:26 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:39.705 05:04:26 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:05:39.705 05:04:26 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:39.705 05:04:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:39.705 No valid GPT data, bailing 00:05:39.705 05:04:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:39.705 05:04:26 -- scripts/common.sh@393 -- # pt= 00:05:39.705 05:04:26 -- scripts/common.sh@394 -- # return 1 00:05:39.705 05:04:26 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:39.705 1+0 records in 00:05:39.705 1+0 records out 00:05:39.705 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0046009 s, 228 MB/s 00:05:39.705 05:04:26 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:39.705 05:04:26 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:39.705 05:04:26 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:05:39.705 05:04:26 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:05:39.705 05:04:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:39.705 No valid GPT data, bailing 00:05:39.705 05:04:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:39.705 05:04:26 -- scripts/common.sh@393 -- # pt= 00:05:39.705 05:04:26 -- scripts/common.sh@394 -- # return 1 00:05:39.705 05:04:26 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:39.705 1+0 records in 00:05:39.705 1+0 records out 00:05:39.705 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00353381 s, 297 MB/s 00:05:39.705 05:04:26 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:39.705 05:04:26 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:39.705 05:04:26 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:05:39.705 05:04:26 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:05:39.705 05:04:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:39.705 No valid GPT data, bailing 00:05:39.705 05:04:27 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:39.705 05:04:27 -- scripts/common.sh@393 -- # pt= 00:05:39.705 05:04:27 -- scripts/common.sh@394 -- # return 1 00:05:39.705 05:04:27 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:39.705 1+0 records in 00:05:39.705 1+0 records out 00:05:39.705 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00432582 s, 242 MB/s 00:05:39.705 05:04:27 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:39.705 05:04:27 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:39.705 05:04:27 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:05:39.705 05:04:27 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:05:39.705 05:04:27 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:39.705 No valid GPT data, bailing 00:05:39.705 05:04:27 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:39.705 05:04:27 -- scripts/common.sh@393 -- # pt= 00:05:39.705 05:04:27 -- scripts/common.sh@394 -- # return 1 00:05:39.705 05:04:27 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:39.705 1+0 records in 00:05:39.705 1+0 records out 
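The wipe pass traced above follows a simple pattern: list every NVMe namespace, skip zoned ones, and scrub anything that carries no partition table. The loop below is a condensed bash sketch of that flow; the command names, paths and dd parameters are copied from the log, but the helper names and exact control flow are simplifications rather than the shipped autotest code.

is_zoned() {
  # a namespace is treated as zoned when queue/zoned reports anything but "none"
  [[ -e /sys/block/$1/queue/zoned && $(cat "/sys/block/$1/queue/zoned") != none ]]
}

for dev in $(ls /dev/nvme*n* | grep -v p || true); do
  is_zoned "${dev##*/}" && continue                      # leave zoned namespaces alone
  # a device only counts as "in use" when spdk-gpt.py finds a valid GPT or blkid reports a partition table
  if ! /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$dev" > /dev/null 2>&1 &&
     [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
    dd if=/dev/zero of="$dev" bs=1M count=1              # scrub the first MiB before testing
  fi
done

In this run none of the four namespaces carried a partition table ("No valid GPT data, bailing" each time), which is why each one receives the 1 MiB dd.
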
00:05:39.705 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00343689 s, 305 MB/s 00:05:39.705 05:04:27 -- spdk/autotest.sh@116 -- # sync 00:05:39.705 05:04:27 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:39.705 05:04:27 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:39.705 05:04:27 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:39.705 05:04:29 -- spdk/autotest.sh@122 -- # uname -s 00:05:39.705 05:04:29 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:05:39.705 05:04:29 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:39.705 05:04:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.705 05:04:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.705 05:04:29 -- common/autotest_common.sh@10 -- # set +x 00:05:39.705 ************************************ 00:05:39.705 START TEST setup.sh 00:05:39.705 ************************************ 00:05:39.705 05:04:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:39.965 * Looking for test storage... 00:05:39.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:39.965 05:04:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:39.965 05:04:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:39.965 05:04:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:39.965 05:04:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:39.965 05:04:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:39.965 05:04:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:39.965 05:04:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:39.965 05:04:29 -- scripts/common.sh@335 -- # IFS=.-: 00:05:39.965 05:04:29 -- scripts/common.sh@335 -- # read -ra ver1 00:05:39.965 05:04:29 -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.965 05:04:29 -- scripts/common.sh@336 -- # read -ra ver2 00:05:39.965 05:04:29 -- scripts/common.sh@337 -- # local 'op=<' 00:05:39.965 05:04:29 -- scripts/common.sh@339 -- # ver1_l=2 00:05:39.965 05:04:29 -- scripts/common.sh@340 -- # ver2_l=1 00:05:39.965 05:04:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:39.965 05:04:29 -- scripts/common.sh@343 -- # case "$op" in 00:05:39.965 05:04:29 -- scripts/common.sh@344 -- # : 1 00:05:39.965 05:04:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:39.965 05:04:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.965 05:04:29 -- scripts/common.sh@364 -- # decimal 1 00:05:39.965 05:04:29 -- scripts/common.sh@352 -- # local d=1 00:05:39.965 05:04:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.965 05:04:29 -- scripts/common.sh@354 -- # echo 1 00:05:39.965 05:04:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:39.965 05:04:29 -- scripts/common.sh@365 -- # decimal 2 00:05:39.965 05:04:29 -- scripts/common.sh@352 -- # local d=2 00:05:39.965 05:04:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.965 05:04:29 -- scripts/common.sh@354 -- # echo 2 00:05:39.965 05:04:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:39.965 05:04:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:39.965 05:04:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:39.965 05:04:29 -- scripts/common.sh@367 -- # return 0 00:05:39.965 05:04:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.965 05:04:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:39.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.965 --rc genhtml_branch_coverage=1 00:05:39.965 --rc genhtml_function_coverage=1 00:05:39.965 --rc genhtml_legend=1 00:05:39.965 --rc geninfo_all_blocks=1 00:05:39.965 --rc geninfo_unexecuted_blocks=1 00:05:39.965 00:05:39.965 ' 00:05:39.965 05:04:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:39.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.965 --rc genhtml_branch_coverage=1 00:05:39.965 --rc genhtml_function_coverage=1 00:05:39.965 --rc genhtml_legend=1 00:05:39.965 --rc geninfo_all_blocks=1 00:05:39.965 --rc geninfo_unexecuted_blocks=1 00:05:39.965 00:05:39.965 ' 00:05:39.965 05:04:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:39.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.965 --rc genhtml_branch_coverage=1 00:05:39.965 --rc genhtml_function_coverage=1 00:05:39.965 --rc genhtml_legend=1 00:05:39.965 --rc geninfo_all_blocks=1 00:05:39.965 --rc geninfo_unexecuted_blocks=1 00:05:39.965 00:05:39.965 ' 00:05:39.965 05:04:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:39.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.965 --rc genhtml_branch_coverage=1 00:05:39.965 --rc genhtml_function_coverage=1 00:05:39.965 --rc genhtml_legend=1 00:05:39.965 --rc geninfo_all_blocks=1 00:05:39.965 --rc geninfo_unexecuted_blocks=1 00:05:39.965 00:05:39.965 ' 00:05:39.965 05:04:29 -- setup/test-setup.sh@10 -- # uname -s 00:05:39.965 05:04:29 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:39.965 05:04:29 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:39.965 05:04:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.965 05:04:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.965 05:04:29 -- common/autotest_common.sh@10 -- # set +x 00:05:39.965 ************************************ 00:05:39.965 START TEST acl 00:05:39.965 ************************************ 00:05:39.965 05:04:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:39.965 * Looking for test storage... 
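The version check that keeps reappearing before each test (lt 1.15 2, cmp_versions, the IFS=.-: reads) reduces to a field-by-field numeric compare. The sketch below mirrors that logic in a few lines of bash; the name cmp_lt is invented for the example, the real helpers live in scripts/common.sh.

cmp_lt() {
  # split both versions on '.', '-' and ':' and compare field by field
  local -a ver1 ver2
  local v
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1      # left side is newer
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0      # left side is older
  done
  return 1                                               # equal versions are not "less than"
}

cmp_lt 1.15 2 && echo "lcov 1.15 predates 2.x"

Here 1.15 compares below 2, so the trace ends with return 0 and the old --rc lcov_branch_coverage / lcov_function_coverage option names are selected for LCOV_OPTS.
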
00:05:39.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:39.965 05:04:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:39.965 05:04:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:39.965 05:04:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:40.224 05:04:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:40.224 05:04:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:40.224 05:04:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:40.225 05:04:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:40.225 05:04:29 -- scripts/common.sh@335 -- # IFS=.-: 00:05:40.225 05:04:29 -- scripts/common.sh@335 -- # read -ra ver1 00:05:40.225 05:04:29 -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.225 05:04:29 -- scripts/common.sh@336 -- # read -ra ver2 00:05:40.225 05:04:29 -- scripts/common.sh@337 -- # local 'op=<' 00:05:40.225 05:04:29 -- scripts/common.sh@339 -- # ver1_l=2 00:05:40.225 05:04:29 -- scripts/common.sh@340 -- # ver2_l=1 00:05:40.225 05:04:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:40.225 05:04:29 -- scripts/common.sh@343 -- # case "$op" in 00:05:40.225 05:04:29 -- scripts/common.sh@344 -- # : 1 00:05:40.225 05:04:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:40.225 05:04:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.225 05:04:29 -- scripts/common.sh@364 -- # decimal 1 00:05:40.225 05:04:29 -- scripts/common.sh@352 -- # local d=1 00:05:40.225 05:04:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.225 05:04:29 -- scripts/common.sh@354 -- # echo 1 00:05:40.225 05:04:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:40.225 05:04:29 -- scripts/common.sh@365 -- # decimal 2 00:05:40.225 05:04:29 -- scripts/common.sh@352 -- # local d=2 00:05:40.225 05:04:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.225 05:04:29 -- scripts/common.sh@354 -- # echo 2 00:05:40.225 05:04:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:40.225 05:04:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:40.225 05:04:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:40.225 05:04:29 -- scripts/common.sh@367 -- # return 0 00:05:40.225 05:04:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.225 05:04:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:40.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.225 --rc genhtml_branch_coverage=1 00:05:40.225 --rc genhtml_function_coverage=1 00:05:40.225 --rc genhtml_legend=1 00:05:40.225 --rc geninfo_all_blocks=1 00:05:40.225 --rc geninfo_unexecuted_blocks=1 00:05:40.225 00:05:40.225 ' 00:05:40.225 05:04:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:40.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.225 --rc genhtml_branch_coverage=1 00:05:40.225 --rc genhtml_function_coverage=1 00:05:40.225 --rc genhtml_legend=1 00:05:40.225 --rc geninfo_all_blocks=1 00:05:40.225 --rc geninfo_unexecuted_blocks=1 00:05:40.225 00:05:40.225 ' 00:05:40.225 05:04:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:40.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.225 --rc genhtml_branch_coverage=1 00:05:40.225 --rc genhtml_function_coverage=1 00:05:40.225 --rc genhtml_legend=1 00:05:40.225 --rc geninfo_all_blocks=1 00:05:40.225 --rc geninfo_unexecuted_blocks=1 00:05:40.225 00:05:40.225 ' 00:05:40.225 05:04:29 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:40.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.225 --rc genhtml_branch_coverage=1 00:05:40.225 --rc genhtml_function_coverage=1 00:05:40.225 --rc genhtml_legend=1 00:05:40.225 --rc geninfo_all_blocks=1 00:05:40.225 --rc geninfo_unexecuted_blocks=1 00:05:40.225 00:05:40.225 ' 00:05:40.225 05:04:29 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:40.225 05:04:29 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:40.225 05:04:29 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:40.225 05:04:29 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:40.225 05:04:29 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:40.225 05:04:29 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:40.225 05:04:29 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:40.225 05:04:29 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:40.225 05:04:29 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:40.225 05:04:29 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:40.225 05:04:29 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:40.225 05:04:29 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:40.225 05:04:29 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:40.225 05:04:29 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:40.225 05:04:29 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:40.225 05:04:29 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:40.225 05:04:29 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:40.225 05:04:29 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:40.225 05:04:29 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:40.225 05:04:29 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:40.225 05:04:29 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:40.225 05:04:29 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:40.225 05:04:29 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:40.225 05:04:29 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:40.225 05:04:29 -- setup/acl.sh@12 -- # devs=() 00:05:40.225 05:04:29 -- setup/acl.sh@12 -- # declare -a devs 00:05:40.225 05:04:29 -- setup/acl.sh@13 -- # drivers=() 00:05:40.225 05:04:29 -- setup/acl.sh@13 -- # declare -A drivers 00:05:40.225 05:04:29 -- setup/acl.sh@51 -- # setup reset 00:05:40.225 05:04:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:40.225 05:04:29 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:41.164 05:04:30 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:41.164 05:04:30 -- setup/acl.sh@16 -- # local dev driver 00:05:41.164 05:04:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:41.164 05:04:30 -- setup/acl.sh@15 -- # setup output status 00:05:41.164 05:04:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:41.164 05:04:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:41.164 Hugepages 00:05:41.164 node hugesize free / total 00:05:41.164 05:04:30 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:41.164 05:04:30 -- setup/acl.sh@19 -- # continue 00:05:41.164 05:04:30 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:05:41.164 00:05:41.164 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:41.164 05:04:30 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:41.164 05:04:30 -- setup/acl.sh@19 -- # continue 00:05:41.164 05:04:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:41.164 05:04:30 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:41.164 05:04:30 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:41.164 05:04:30 -- setup/acl.sh@20 -- # continue 00:05:41.164 05:04:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:41.164 05:04:30 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:41.164 05:04:30 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:41.164 05:04:30 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:41.164 05:04:30 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:41.164 05:04:30 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:41.164 05:04:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:41.423 05:04:30 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:41.424 05:04:30 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:41.424 05:04:30 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:41.424 05:04:30 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:41.424 05:04:30 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:41.424 05:04:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:41.424 05:04:30 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:41.424 05:04:30 -- setup/acl.sh@54 -- # run_test denied denied 00:05:41.424 05:04:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.424 05:04:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.424 05:04:30 -- common/autotest_common.sh@10 -- # set +x 00:05:41.424 ************************************ 00:05:41.424 START TEST denied 00:05:41.424 ************************************ 00:05:41.424 05:04:30 -- common/autotest_common.sh@1114 -- # denied 00:05:41.424 05:04:30 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:41.424 05:04:30 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:41.424 05:04:30 -- setup/acl.sh@38 -- # setup output config 00:05:41.424 05:04:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:41.424 05:04:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:42.360 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:42.360 05:04:31 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:42.360 05:04:31 -- setup/acl.sh@28 -- # local dev driver 00:05:42.360 05:04:31 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:42.360 05:04:31 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:42.360 05:04:31 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:42.360 05:04:31 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:42.360 05:04:31 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:42.360 05:04:31 -- setup/acl.sh@41 -- # setup reset 00:05:42.361 05:04:31 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:42.361 05:04:31 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:42.928 ************************************ 00:05:42.928 END TEST denied 00:05:42.928 ************************************ 00:05:42.928 00:05:42.928 real 0m1.501s 00:05:42.928 user 0m0.615s 00:05:42.928 sys 0m0.848s 00:05:42.928 05:04:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.928 05:04:32 -- 
common/autotest_common.sh@10 -- # set +x 00:05:42.928 05:04:32 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:42.928 05:04:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.928 05:04:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.928 05:04:32 -- common/autotest_common.sh@10 -- # set +x 00:05:42.928 ************************************ 00:05:42.928 START TEST allowed 00:05:42.928 ************************************ 00:05:42.928 05:04:32 -- common/autotest_common.sh@1114 -- # allowed 00:05:42.928 05:04:32 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:42.928 05:04:32 -- setup/acl.sh@45 -- # setup output config 00:05:42.928 05:04:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.928 05:04:32 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:42.928 05:04:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:43.862 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.862 05:04:33 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:43.862 05:04:33 -- setup/acl.sh@28 -- # local dev driver 00:05:43.862 05:04:33 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:43.862 05:04:33 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:43.863 05:04:33 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:05:43.863 05:04:33 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:43.863 05:04:33 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:43.863 05:04:33 -- setup/acl.sh@48 -- # setup reset 00:05:43.863 05:04:33 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:43.863 05:04:33 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:44.439 00:05:44.439 real 0m1.558s 00:05:44.439 user 0m0.725s 00:05:44.439 sys 0m0.841s 00:05:44.439 05:04:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.439 05:04:34 -- common/autotest_common.sh@10 -- # set +x 00:05:44.439 ************************************ 00:05:44.439 END TEST allowed 00:05:44.439 ************************************ 00:05:44.439 ************************************ 00:05:44.439 END TEST acl 00:05:44.439 ************************************ 00:05:44.439 00:05:44.439 real 0m4.476s 00:05:44.439 user 0m1.987s 00:05:44.439 sys 0m2.484s 00:05:44.439 05:04:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.439 05:04:34 -- common/autotest_common.sh@10 -- # set +x 00:05:44.439 05:04:34 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:44.439 05:04:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.439 05:04:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.439 05:04:34 -- common/autotest_common.sh@10 -- # set +x 00:05:44.439 ************************************ 00:05:44.439 START TEST hugepages 00:05:44.439 ************************************ 00:05:44.439 05:04:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:44.698 * Looking for test storage... 
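The denied/allowed results above hinge on a single check: which kernel driver a PCI function is bound to after setup.sh runs. Below is a minimal sketch of that verification, modelled on the readlink steps visible in the trace; the function name is invented for the example.

verify_driver() {
  # usage: verify_driver <expected-driver> <bdf>...
  local expected=$1 bdf driver
  shift
  for bdf in "$@"; do
    [[ -e /sys/bus/pci/devices/$bdf ]] || return 1
    driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")   # e.g. /sys/bus/pci/drivers/nvme
    [[ ${driver##*/} == "$expected" ]] || return 1
  done
}

# With PCI_BLOCKED=' 0000:00:06.0' the controller is skipped by setup.sh and stays on the
# kernel nvme driver; with PCI_ALLOWED=0000:00:06.0 it is rebound to uio_pci_generic instead.
verify_driver nvme 0000:00:06.0 && echo "0000:00:06.0 still bound to nvme"
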
00:05:44.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:44.698 05:04:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:44.698 05:04:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:44.698 05:04:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:44.698 05:04:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:44.698 05:04:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:44.698 05:04:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:44.698 05:04:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:44.698 05:04:34 -- scripts/common.sh@335 -- # IFS=.-: 00:05:44.698 05:04:34 -- scripts/common.sh@335 -- # read -ra ver1 00:05:44.698 05:04:34 -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.698 05:04:34 -- scripts/common.sh@336 -- # read -ra ver2 00:05:44.698 05:04:34 -- scripts/common.sh@337 -- # local 'op=<' 00:05:44.698 05:04:34 -- scripts/common.sh@339 -- # ver1_l=2 00:05:44.698 05:04:34 -- scripts/common.sh@340 -- # ver2_l=1 00:05:44.698 05:04:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:44.698 05:04:34 -- scripts/common.sh@343 -- # case "$op" in 00:05:44.698 05:04:34 -- scripts/common.sh@344 -- # : 1 00:05:44.698 05:04:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:44.698 05:04:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.698 05:04:34 -- scripts/common.sh@364 -- # decimal 1 00:05:44.698 05:04:34 -- scripts/common.sh@352 -- # local d=1 00:05:44.698 05:04:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.698 05:04:34 -- scripts/common.sh@354 -- # echo 1 00:05:44.698 05:04:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:44.698 05:04:34 -- scripts/common.sh@365 -- # decimal 2 00:05:44.698 05:04:34 -- scripts/common.sh@352 -- # local d=2 00:05:44.698 05:04:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.698 05:04:34 -- scripts/common.sh@354 -- # echo 2 00:05:44.698 05:04:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:44.698 05:04:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:44.698 05:04:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:44.698 05:04:34 -- scripts/common.sh@367 -- # return 0 00:05:44.698 05:04:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.698 05:04:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:44.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.698 --rc genhtml_branch_coverage=1 00:05:44.698 --rc genhtml_function_coverage=1 00:05:44.698 --rc genhtml_legend=1 00:05:44.698 --rc geninfo_all_blocks=1 00:05:44.698 --rc geninfo_unexecuted_blocks=1 00:05:44.698 00:05:44.698 ' 00:05:44.698 05:04:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:44.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.698 --rc genhtml_branch_coverage=1 00:05:44.698 --rc genhtml_function_coverage=1 00:05:44.698 --rc genhtml_legend=1 00:05:44.698 --rc geninfo_all_blocks=1 00:05:44.698 --rc geninfo_unexecuted_blocks=1 00:05:44.698 00:05:44.698 ' 00:05:44.698 05:04:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:44.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.698 --rc genhtml_branch_coverage=1 00:05:44.698 --rc genhtml_function_coverage=1 00:05:44.698 --rc genhtml_legend=1 00:05:44.698 --rc geninfo_all_blocks=1 00:05:44.698 --rc geninfo_unexecuted_blocks=1 00:05:44.698 00:05:44.698 ' 00:05:44.698 05:04:34 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:44.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.698 --rc genhtml_branch_coverage=1 00:05:44.698 --rc genhtml_function_coverage=1 00:05:44.698 --rc genhtml_legend=1 00:05:44.698 --rc geninfo_all_blocks=1 00:05:44.698 --rc geninfo_unexecuted_blocks=1 00:05:44.698 00:05:44.698 ' 00:05:44.698 05:04:34 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:44.698 05:04:34 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:44.698 05:04:34 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:44.698 05:04:34 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:44.698 05:04:34 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:44.698 05:04:34 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:44.698 05:04:34 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:44.698 05:04:34 -- setup/common.sh@18 -- # local node= 00:05:44.698 05:04:34 -- setup/common.sh@19 -- # local var val 00:05:44.698 05:04:34 -- setup/common.sh@20 -- # local mem_f mem 00:05:44.698 05:04:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.698 05:04:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.698 05:04:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.698 05:04:34 -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.698 05:04:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.698 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.698 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.698 05:04:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 4544008 kB' 'MemAvailable: 7337420 kB' 'Buffers: 2684 kB' 'Cached: 2996928 kB' 'SwapCached: 0 kB' 'Active: 455116 kB' 'Inactive: 2661240 kB' 'Active(anon): 127256 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118360 kB' 'Mapped: 50564 kB' 'Shmem: 10512 kB' 'KReclaimable: 82928 kB' 'Slab: 183204 kB' 'SReclaimable: 82928 kB' 'SUnreclaim: 100276 kB' 'KernelStack: 6752 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411004 kB' 'Committed_AS: 319720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:44.698 05:04:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.698 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.698 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- 
setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.699 05:04:34 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.699 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.699 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.700 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.700 05:04:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.700 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.700 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.700 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.700 05:04:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.700 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.700 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.700 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.700 05:04:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.700 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.700 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.700 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.700 05:04:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.700 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.700 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.700 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.700 05:04:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.700 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.700 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.700 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.700 05:04:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.700 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.700 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.700 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.700 05:04:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.700 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.700 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.700 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.700 05:04:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.700 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.700 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.700 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.700 05:04:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.700 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.700 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.700 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.700 05:04:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.700 05:04:34 -- setup/common.sh@32 -- # continue 00:05:44.700 05:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.700 05:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.700 05:04:34 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:44.700 05:04:34 -- setup/common.sh@33 -- # echo 2048 00:05:44.700 05:04:34 -- setup/common.sh@33 -- # return 0 00:05:44.700 05:04:34 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:44.700 05:04:34 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:44.700 05:04:34 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:44.700 05:04:34 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:44.700 05:04:34 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:44.700 05:04:34 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:44.700 05:04:34 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:44.700 05:04:34 -- setup/hugepages.sh@207 -- # get_nodes 00:05:44.700 05:04:34 -- setup/hugepages.sh@27 -- # local node 00:05:44.700 05:04:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:44.700 05:04:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:44.700 05:04:34 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:44.700 05:04:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:44.700 05:04:34 -- setup/hugepages.sh@208 -- # clear_hp 00:05:44.700 05:04:34 -- setup/hugepages.sh@37 -- # local node hp 00:05:44.700 05:04:34 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:44.700 05:04:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:44.700 05:04:34 -- setup/hugepages.sh@41 -- # echo 0 00:05:44.700 05:04:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:44.700 05:04:34 -- setup/hugepages.sh@41 -- # echo 0 00:05:44.700 05:04:34 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:44.700 05:04:34 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:44.700 05:04:34 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:44.700 05:04:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.700 05:04:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.700 05:04:34 -- common/autotest_common.sh@10 -- # set +x 00:05:44.700 ************************************ 00:05:44.700 START TEST default_setup 00:05:44.700 ************************************ 00:05:44.700 05:04:34 -- common/autotest_common.sh@1114 -- # default_setup 00:05:44.700 05:04:34 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:44.700 05:04:34 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:44.700 05:04:34 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:44.700 05:04:34 -- setup/hugepages.sh@51 -- # shift 00:05:44.700 05:04:34 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:44.700 05:04:34 -- setup/hugepages.sh@52 -- # local node_ids 00:05:44.700 05:04:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:44.700 05:04:34 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:44.700 05:04:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:44.700 05:04:34 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:44.700 05:04:34 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:44.700 05:04:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:44.700 05:04:34 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:44.700 05:04:34 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:44.700 05:04:34 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:44.700 05:04:34 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:44.700 05:04:34 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:44.700 05:04:34 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:44.700 05:04:34 -- setup/hugepages.sh@73 -- # return 0 00:05:44.700 05:04:34 -- setup/hugepages.sh@137 -- # setup output 00:05:44.700 05:04:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:44.700 05:04:34 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:45.636 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:45.636 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:45.636 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:45.636 05:04:35 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:45.636 05:04:35 -- setup/hugepages.sh@89 -- # local node 00:05:45.636 05:04:35 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:45.636 05:04:35 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:45.636 05:04:35 -- setup/hugepages.sh@92 -- # local surp 00:05:45.636 05:04:35 -- setup/hugepages.sh@93 -- # local resv 00:05:45.636 05:04:35 -- setup/hugepages.sh@94 -- # local anon 00:05:45.636 05:04:35 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:45.636 05:04:35 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:45.636 05:04:35 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:45.636 05:04:35 -- setup/common.sh@18 -- # local node= 00:05:45.636 05:04:35 -- setup/common.sh@19 -- # local var val 00:05:45.636 05:04:35 -- setup/common.sh@20 -- # local mem_f mem 00:05:45.636 05:04:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.636 05:04:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.636 05:04:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.636 05:04:35 -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.636 05:04:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6643804 kB' 'MemAvailable: 9437044 kB' 'Buffers: 2684 kB' 'Cached: 2996924 kB' 'SwapCached: 0 kB' 'Active: 456900 kB' 'Inactive: 2661252 kB' 'Active(anon): 129040 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 120120 kB' 'Mapped: 50688 kB' 'Shmem: 10492 kB' 'KReclaimable: 82556 kB' 'Slab: 182896 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100340 kB' 'KernelStack: 6720 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- 
setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.636 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.636 05:04:35 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.637 05:04:35 -- setup/common.sh@33 -- # echo 0 00:05:45.637 05:04:35 -- setup/common.sh@33 -- # return 0 00:05:45.637 05:04:35 -- setup/hugepages.sh@97 -- # anon=0 00:05:45.637 05:04:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:45.637 05:04:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:45.637 05:04:35 -- setup/common.sh@18 -- # local node= 00:05:45.637 05:04:35 -- setup/common.sh@19 -- # local var val 00:05:45.637 05:04:35 -- setup/common.sh@20 -- # local mem_f mem 00:05:45.637 05:04:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.637 05:04:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.637 05:04:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.637 05:04:35 -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.637 05:04:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6643804 kB' 'MemAvailable: 9437044 kB' 'Buffers: 2684 kB' 'Cached: 2996920 kB' 'SwapCached: 0 kB' 'Active: 456656 kB' 'Inactive: 2661252 kB' 'Active(anon): 128796 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119984 kB' 'Mapped: 50568 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182896 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100340 kB' 'KernelStack: 6768 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 
00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- 
setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.637 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.637 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 
00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.638 05:04:35 -- setup/common.sh@33 -- # echo 0 00:05:45.638 05:04:35 -- setup/common.sh@33 -- # return 0 00:05:45.638 05:04:35 -- setup/hugepages.sh@99 -- # surp=0 00:05:45.638 05:04:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:45.638 05:04:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:45.638 05:04:35 -- setup/common.sh@18 -- # local node= 00:05:45.638 05:04:35 -- setup/common.sh@19 -- # local var val 00:05:45.638 05:04:35 -- setup/common.sh@20 -- # local mem_f mem 00:05:45.638 05:04:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.638 05:04:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.638 05:04:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.638 05:04:35 -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.638 05:04:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.638 
05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6643804 kB' 'MemAvailable: 9437044 kB' 'Buffers: 2684 kB' 'Cached: 2996920 kB' 'SwapCached: 0 kB' 'Active: 456612 kB' 'Inactive: 2661252 kB' 'Active(anon): 128752 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119872 kB' 'Mapped: 50568 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182896 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100340 kB' 'KernelStack: 6736 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 
05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.638 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.638 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.897 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.897 05:04:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.898 
05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.898 05:04:35 -- setup/common.sh@33 -- # echo 0 00:05:45.898 05:04:35 -- setup/common.sh@33 -- # return 0 00:05:45.898 nr_hugepages=1024 00:05:45.898 resv_hugepages=0 00:05:45.898 surplus_hugepages=0 00:05:45.898 anon_hugepages=0 00:05:45.898 05:04:35 -- setup/hugepages.sh@100 -- # resv=0 00:05:45.898 05:04:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:45.898 05:04:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:45.898 05:04:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:45.898 05:04:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:45.898 05:04:35 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:45.898 05:04:35 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:45.898 05:04:35 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:45.898 05:04:35 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:45.898 05:04:35 -- setup/common.sh@18 -- # local node= 00:05:45.898 05:04:35 -- setup/common.sh@19 -- # local var val 00:05:45.898 05:04:35 -- setup/common.sh@20 -- # local mem_f mem 00:05:45.898 05:04:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.898 05:04:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.898 05:04:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.898 05:04:35 -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.898 05:04:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6643804 kB' 'MemAvailable: 9437048 kB' 'Buffers: 2684 kB' 'Cached: 2996920 kB' 'SwapCached: 0 kB' 'Active: 456840 kB' 'Inactive: 2661256 kB' 'Active(anon): 128980 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119832 kB' 'Mapped: 50568 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182900 kB' 
'SReclaimable: 82556 kB' 'SUnreclaim: 100344 kB' 'KernelStack: 6752 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 
05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- 
setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.898 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.898 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.899 05:04:35 -- 
setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.899 05:04:35 -- setup/common.sh@33 -- # echo 1024 00:05:45.899 05:04:35 -- setup/common.sh@33 -- # return 0 00:05:45.899 05:04:35 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:45.899 05:04:35 -- setup/hugepages.sh@112 -- # get_nodes 00:05:45.899 05:04:35 -- setup/hugepages.sh@27 -- # local node 00:05:45.899 05:04:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:45.899 05:04:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:45.899 05:04:35 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:45.899 05:04:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:45.899 05:04:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:45.899 05:04:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:45.899 05:04:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:45.899 05:04:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:45.899 05:04:35 -- setup/common.sh@18 -- # local node=0 00:05:45.899 05:04:35 -- setup/common.sh@19 -- # local var val 00:05:45.899 05:04:35 -- setup/common.sh@20 -- # local mem_f mem 00:05:45.899 05:04:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.899 05:04:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:45.899 05:04:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:45.899 05:04:35 -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.899 05:04:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6643804 kB' 'MemUsed: 5595304 kB' 'SwapCached: 0 kB' 'Active: 456652 kB' 'Inactive: 2661256 kB' 'Active(anon): 128792 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 2999604 kB' 'Mapped: 50568 kB' 'AnonPages: 119872 kB' 'Shmem: 10488 kB' 'KernelStack: 6752 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82556 kB' 'Slab: 182900 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 
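What the trace above boils down to: setup/common.sh resolves HugePages_Total to 1024 by walking /proc/meminfo field by field with IFS=': ', and hugepages.sh then repeats the same walk against /sys/devices/system/node/node0/meminfo to pick up HugePages_Surp for the per-node check. A minimal sketch of that lookup follows, assuming a hypothetical helper name get_meminfo_value rather than the actual SPDK function:

#!/usr/bin/env bash
# Sketch of the meminfo lookup seen in the xtrace above -- illustrative only,
# not the SPDK setup/common.sh helper. It walks a meminfo file with IFS=': '
# and prints the value of one requested key (HugePages_Total, HugePages_Surp, ...).
get_meminfo_value() {
    local key=$1 node=${2:-} file var val _
    if [[ -n $node ]]; then
        file=/sys/devices/system/node/node${node}/meminfo   # per-node counters
    else
        file=/proc/meminfo                                   # system-wide counters
    fi
    while IFS=': ' read -r var val _; do
        # Per-node meminfo lines start with "Node <N>"; re-split without that prefix.
        if [[ $var == Node ]]; then
            IFS=': ' read -r var val _ <<<"$_"
        fi
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done <"$file"
    return 1
}

# The check performed above, in miniature: the 1024 huge pages the test
# configured should be visible both system-wide and on node 0, with no surplus.
total=$(get_meminfo_value HugePages_Total)
node0=$(get_meminfo_value HugePages_Total 0)
surp=$(get_meminfo_value HugePages_Surp 0)
echo "node0=${node0} expecting 1024 (total=${total}, surplus=${surp:-0})"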
00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.899 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.899 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.900 05:04:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # continue 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.900 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.900 05:04:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.900 05:04:35 -- setup/common.sh@33 -- # echo 0 00:05:45.900 05:04:35 -- setup/common.sh@33 -- # return 0 00:05:45.900 05:04:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:45.900 05:04:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:45.900 05:04:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:45.900 05:04:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:45.900 node0=1024 expecting 1024 00:05:45.900 05:04:35 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:45.900 05:04:35 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:45.900 00:05:45.900 real 0m1.068s 00:05:45.900 user 0m0.517s 00:05:45.900 sys 0m0.462s 00:05:45.900 05:04:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.900 05:04:35 -- common/autotest_common.sh@10 -- # set +x 00:05:45.900 ************************************ 00:05:45.900 END TEST default_setup 00:05:45.900 ************************************ 00:05:45.900 05:04:35 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:45.900 05:04:35 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.900 05:04:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.900 05:04:35 -- common/autotest_common.sh@10 -- # set +x 00:05:45.900 ************************************ 00:05:45.900 START TEST per_node_1G_alloc 00:05:45.900 ************************************ 00:05:45.900 05:04:35 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:05:45.900 05:04:35 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:45.900 05:04:35 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:45.900 05:04:35 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:45.900 05:04:35 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:45.900 05:04:35 -- setup/hugepages.sh@51 -- # shift 00:05:45.900 05:04:35 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:45.900 05:04:35 -- setup/hugepages.sh@52 -- # local node_ids 00:05:45.900 05:04:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:45.900 05:04:35 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:45.900 05:04:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:45.900 05:04:35 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:45.900 05:04:35 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:45.900 05:04:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:45.900 05:04:35 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:45.900 05:04:35 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:45.900 05:04:35 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:45.900 05:04:35 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:45.900 05:04:35 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:45.900 05:04:35 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:45.900 05:04:35 -- setup/hugepages.sh@73 -- # return 0 00:05:45.900 05:04:35 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:45.900 05:04:35 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:45.900 05:04:35 -- setup/hugepages.sh@146 -- # setup output 00:05:45.900 05:04:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:45.900 05:04:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:46.158 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.158 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.158 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.421 05:04:35 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:46.421 05:04:35 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:46.421 05:04:35 -- setup/hugepages.sh@89 -- # local node 00:05:46.421 05:04:35 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:46.421 05:04:35 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:46.421 05:04:35 -- setup/hugepages.sh@92 -- # local surp 00:05:46.421 05:04:35 -- setup/hugepages.sh@93 -- # local resv 00:05:46.421 05:04:35 -- setup/hugepages.sh@94 -- # local anon 00:05:46.421 05:04:35 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:46.421 05:04:35 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:46.421 05:04:35 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:46.421 05:04:35 -- setup/common.sh@18 -- # local node= 00:05:46.421 05:04:35 -- setup/common.sh@19 -- # local var val 00:05:46.421 05:04:35 -- setup/common.sh@20 -- # local mem_f mem 00:05:46.421 05:04:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.421 05:04:35 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.421 05:04:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.421 05:04:35 -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.421 05:04:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7701288 kB' 'MemAvailable: 10494532 kB' 'Buffers: 2684 kB' 'Cached: 2996920 kB' 'SwapCached: 0 kB' 'Active: 456768 kB' 'Inactive: 2661256 kB' 'Active(anon): 128908 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119984 kB' 'Mapped: 50712 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182936 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100380 kB' 'KernelStack: 6744 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 320832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 
-- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 
05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.421 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.421 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:35 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.422 05:04:35 -- setup/common.sh@33 -- # echo 0 00:05:46.422 05:04:35 -- setup/common.sh@33 -- # return 0 00:05:46.422 05:04:35 -- setup/hugepages.sh@97 -- # anon=0 00:05:46.422 05:04:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:46.422 05:04:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:46.422 05:04:35 -- setup/common.sh@18 -- # local node= 00:05:46.422 05:04:35 -- setup/common.sh@19 -- # local var val 00:05:46.422 05:04:35 -- setup/common.sh@20 -- # local mem_f mem 00:05:46.422 05:04:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.422 05:04:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.422 05:04:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.422 05:04:35 -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.422 05:04:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7701288 kB' 'MemAvailable: 10494532 kB' 'Buffers: 2684 kB' 'Cached: 2996920 kB' 'SwapCached: 0 kB' 'Active: 456660 kB' 'Inactive: 2661256 kB' 'Active(anon): 128800 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 
kB' 'Inactive(file): 2661256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119952 kB' 'Mapped: 50568 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182944 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100388 kB' 'KernelStack: 6768 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 320832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.422 05:04:35 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.422 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.422 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.422 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.422 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.422 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:36 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.422 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.422 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.422 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.422 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.422 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.422 05:04:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.422 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.422 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # 
continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.423 05:04:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.423 05:04:36 -- setup/common.sh@33 -- # echo 0 00:05:46.423 05:04:36 -- setup/common.sh@33 -- # return 0 00:05:46.423 05:04:36 -- setup/hugepages.sh@99 -- # surp=0 00:05:46.423 05:04:36 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:46.423 05:04:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:46.423 05:04:36 -- setup/common.sh@18 -- # local node= 00:05:46.423 05:04:36 -- setup/common.sh@19 -- # local var val 00:05:46.423 05:04:36 -- setup/common.sh@20 -- # local mem_f mem 00:05:46.423 05:04:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.423 05:04:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.423 05:04:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.423 05:04:36 -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.423 05:04:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.423 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7701288 kB' 'MemAvailable: 10494532 kB' 'Buffers: 2684 kB' 'Cached: 2996920 kB' 'SwapCached: 0 kB' 'Active: 456604 kB' 'Inactive: 2661256 kB' 'Active(anon): 128744 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119856 kB' 'Mapped: 50568 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182944 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100388 kB' 'KernelStack: 6736 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 320832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.424 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.424 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 05:04:36 -- setup/common.sh@33 -- # echo 0 00:05:46.425 05:04:36 -- setup/common.sh@33 -- # return 0 00:05:46.425 05:04:36 -- setup/hugepages.sh@100 -- # resv=0 00:05:46.425 nr_hugepages=512 00:05:46.425 05:04:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:46.425 resv_hugepages=0 00:05:46.425 05:04:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:46.425 surplus_hugepages=0 00:05:46.425 05:04:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:46.425 anon_hugepages=0 00:05:46.425 05:04:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:46.425 05:04:36 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:46.425 05:04:36 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:46.425 05:04:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:46.425 05:04:36 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:46.425 05:04:36 -- setup/common.sh@18 -- # local node= 00:05:46.425 05:04:36 -- setup/common.sh@19 -- # local var val 00:05:46.425 05:04:36 -- setup/common.sh@20 -- # local mem_f mem 00:05:46.425 05:04:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.425 05:04:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.425 05:04:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.425 05:04:36 -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.425 05:04:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7701036 kB' 'MemAvailable: 10494280 kB' 'Buffers: 2684 kB' 'Cached: 2996920 kB' 'SwapCached: 0 kB' 'Active: 456668 kB' 'Inactive: 2661256 kB' 'Active(anon): 128808 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119952 kB' 'Mapped: 50568 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182944 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100388 kB' 'KernelStack: 6768 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 320832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 
05:04:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 05:04:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 
05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.426 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.426 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.427 05:04:36 -- setup/common.sh@33 -- # echo 512 00:05:46.427 05:04:36 -- setup/common.sh@33 -- # return 0 00:05:46.427 05:04:36 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:46.427 05:04:36 -- setup/hugepages.sh@112 -- # get_nodes 00:05:46.427 05:04:36 -- setup/hugepages.sh@27 -- # local node 00:05:46.427 05:04:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:46.427 05:04:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:46.427 05:04:36 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:46.427 05:04:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:46.427 05:04:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:46.427 05:04:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:46.427 05:04:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:46.427 05:04:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:46.427 05:04:36 -- setup/common.sh@18 -- # local node=0 00:05:46.427 05:04:36 -- setup/common.sh@19 -- # local 
var val 00:05:46.427 05:04:36 -- setup/common.sh@20 -- # local mem_f mem 00:05:46.427 05:04:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.427 05:04:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:46.427 05:04:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:46.427 05:04:36 -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.427 05:04:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7701036 kB' 'MemUsed: 4538072 kB' 'SwapCached: 0 kB' 'Active: 456712 kB' 'Inactive: 2661256 kB' 'Active(anon): 128852 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 2999604 kB' 'Mapped: 50568 kB' 'AnonPages: 119960 kB' 'Shmem: 10488 kB' 'KernelStack: 6752 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82556 kB' 'Slab: 182936 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- 
setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.427 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.427 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.428 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.428 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.428 05:04:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.428 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.428 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.428 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.428 05:04:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.428 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.428 05:04:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:46.428 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.428 05:04:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.428 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.428 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.428 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.428 05:04:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.428 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.428 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.428 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.428 05:04:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.428 05:04:36 -- setup/common.sh@32 -- # continue 00:05:46.428 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.428 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.428 05:04:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.428 05:04:36 -- setup/common.sh@33 -- # echo 0 00:05:46.428 05:04:36 -- setup/common.sh@33 -- # return 0 00:05:46.428 05:04:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:46.428 05:04:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:46.428 05:04:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:46.428 05:04:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:46.428 node0=512 expecting 512 00:05:46.428 05:04:36 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:46.428 05:04:36 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:46.428 00:05:46.428 real 0m0.559s 00:05:46.428 user 0m0.282s 00:05:46.428 sys 0m0.316s 00:05:46.428 05:04:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.428 05:04:36 -- common/autotest_common.sh@10 -- # set +x 00:05:46.428 ************************************ 00:05:46.428 END TEST per_node_1G_alloc 00:05:46.428 ************************************ 00:05:46.428 05:04:36 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:46.428 05:04:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.428 05:04:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.428 05:04:36 -- common/autotest_common.sh@10 -- # set +x 00:05:46.428 ************************************ 00:05:46.428 START TEST even_2G_alloc 00:05:46.428 ************************************ 00:05:46.428 05:04:36 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:05:46.428 05:04:36 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:46.428 05:04:36 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:46.428 05:04:36 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:46.428 05:04:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:46.428 05:04:36 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:46.428 05:04:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:46.428 05:04:36 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:46.428 05:04:36 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:46.428 05:04:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:46.428 05:04:36 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:46.428 05:04:36 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:46.428 05:04:36 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:46.428 05:04:36 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:46.428 05:04:36 -- 
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:46.428 05:04:36 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:46.428 05:04:36 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:46.428 05:04:36 -- setup/hugepages.sh@83 -- # : 0 00:05:46.428 05:04:36 -- setup/hugepages.sh@84 -- # : 0 00:05:46.428 05:04:36 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:46.428 05:04:36 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:46.428 05:04:36 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:46.428 05:04:36 -- setup/hugepages.sh@153 -- # setup output 00:05:46.428 05:04:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:46.428 05:04:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:46.999 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.999 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.999 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.999 05:04:36 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:46.999 05:04:36 -- setup/hugepages.sh@89 -- # local node 00:05:46.999 05:04:36 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:46.999 05:04:36 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:46.999 05:04:36 -- setup/hugepages.sh@92 -- # local surp 00:05:46.999 05:04:36 -- setup/hugepages.sh@93 -- # local resv 00:05:46.999 05:04:36 -- setup/hugepages.sh@94 -- # local anon 00:05:46.999 05:04:36 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:46.999 05:04:36 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:46.999 05:04:36 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:46.999 05:04:36 -- setup/common.sh@18 -- # local node= 00:05:47.000 05:04:36 -- setup/common.sh@19 -- # local var val 00:05:47.000 05:04:36 -- setup/common.sh@20 -- # local mem_f mem 00:05:47.000 05:04:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.000 05:04:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.000 05:04:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.000 05:04:36 -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.000 05:04:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6658932 kB' 'MemAvailable: 9452180 kB' 'Buffers: 2684 kB' 'Cached: 2996924 kB' 'SwapCached: 0 kB' 'Active: 457088 kB' 'Inactive: 2661260 kB' 'Active(anon): 129228 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 120444 kB' 'Mapped: 50748 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182940 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100384 kB' 'KernelStack: 6800 kB' 'PageTables: 4608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 
05:04:36 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.000 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.000 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # 
continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.001 05:04:36 -- setup/common.sh@33 -- # echo 0 00:05:47.001 05:04:36 -- setup/common.sh@33 -- # return 0 00:05:47.001 05:04:36 -- setup/hugepages.sh@97 -- # anon=0 00:05:47.001 05:04:36 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:47.001 05:04:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:47.001 05:04:36 -- setup/common.sh@18 -- # local node= 00:05:47.001 05:04:36 -- setup/common.sh@19 -- # local var val 00:05:47.001 05:04:36 -- setup/common.sh@20 -- # local mem_f mem 00:05:47.001 05:04:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.001 05:04:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.001 05:04:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.001 05:04:36 -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.001 05:04:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6658932 kB' 'MemAvailable: 9452180 kB' 'Buffers: 2684 kB' 'Cached: 2996924 kB' 'SwapCached: 0 kB' 'Active: 456560 kB' 'Inactive: 2661260 kB' 'Active(anon): 128700 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 120076 kB' 'Mapped: 50696 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182944 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100388 kB' 'KernelStack: 6736 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 
00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.001 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.001 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # 
continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.002 05:04:36 -- setup/common.sh@33 -- # echo 0 00:05:47.002 05:04:36 -- setup/common.sh@33 -- # return 0 00:05:47.002 05:04:36 -- setup/hugepages.sh@99 -- # surp=0 00:05:47.002 05:04:36 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:47.002 05:04:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:47.002 05:04:36 -- setup/common.sh@18 -- # local node= 00:05:47.002 05:04:36 -- setup/common.sh@19 -- # local var val 00:05:47.002 05:04:36 -- setup/common.sh@20 -- # local mem_f mem 00:05:47.002 05:04:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.002 05:04:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.002 05:04:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.002 05:04:36 -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.002 05:04:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6658932 kB' 'MemAvailable: 9452180 kB' 'Buffers: 2684 kB' 'Cached: 2996924 kB' 'SwapCached: 0 kB' 'Active: 456676 kB' 'Inactive: 2661260 kB' 'Active(anon): 128816 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119892 kB' 'Mapped: 50568 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182944 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100388 kB' 'KernelStack: 6736 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.002 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.002 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 
00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- 
setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 
00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.003 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.003 05:04:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.004 05:04:36 -- setup/common.sh@33 -- # echo 0 00:05:47.004 05:04:36 -- setup/common.sh@33 -- # return 0 00:05:47.004 05:04:36 -- setup/hugepages.sh@100 -- # resv=0 00:05:47.004 nr_hugepages=1024 00:05:47.004 05:04:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:47.004 05:04:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:47.004 resv_hugepages=0 00:05:47.004 surplus_hugepages=0 00:05:47.004 05:04:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:47.004 anon_hugepages=0 00:05:47.004 05:04:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:47.004 05:04:36 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:47.004 05:04:36 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:47.004 05:04:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:47.004 05:04:36 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:47.004 05:04:36 -- setup/common.sh@18 -- # local node= 00:05:47.004 05:04:36 -- setup/common.sh@19 -- # local var val 00:05:47.004 05:04:36 -- setup/common.sh@20 -- # local mem_f mem 00:05:47.004 05:04:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.004 05:04:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.004 05:04:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.004 05:04:36 -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.004 05:04:36 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6658932 kB' 'MemAvailable: 9452180 kB' 'Buffers: 2684 kB' 'Cached: 2996924 kB' 'SwapCached: 0 kB' 'Active: 456632 kB' 'Inactive: 2661260 kB' 'Active(anon): 128772 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119856 kB' 'Mapped: 50568 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182940 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100384 kB' 'KernelStack: 6720 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.004 
05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.004 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.004 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 
00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 
00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.005 05:04:36 -- setup/common.sh@33 -- # echo 1024 00:05:47.005 05:04:36 -- setup/common.sh@33 -- # return 0 00:05:47.005 05:04:36 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:47.005 05:04:36 -- setup/hugepages.sh@112 -- # get_nodes 00:05:47.005 05:04:36 -- setup/hugepages.sh@27 -- # local node 00:05:47.005 05:04:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:47.005 05:04:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:47.005 05:04:36 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:47.005 05:04:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:47.005 05:04:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:47.005 05:04:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:47.005 05:04:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:47.005 05:04:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:47.005 05:04:36 -- setup/common.sh@18 -- # local node=0 00:05:47.005 05:04:36 -- setup/common.sh@19 -- # local var val 00:05:47.005 05:04:36 -- setup/common.sh@20 -- # local mem_f mem 00:05:47.005 05:04:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.005 05:04:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:47.005 05:04:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:47.005 05:04:36 -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.005 05:04:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.005 05:04:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6659240 kB' 'MemUsed: 5579868 kB' 'SwapCached: 0 kB' 'Active: 456748 kB' 'Inactive: 2661260 kB' 'Active(anon): 128888 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 2999608 kB' 'Mapped: 50568 kB' 'AnonPages: 119976 kB' 'Shmem: 10488 kB' 'KernelStack: 6768 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82556 kB' 'Slab: 182940 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:47.005 05:04:36 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.005 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.005 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 
00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- 
setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # continue 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.006 05:04:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.006 05:04:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.006 05:04:36 -- setup/common.sh@33 -- # echo 0 00:05:47.006 05:04:36 -- setup/common.sh@33 -- # return 0 00:05:47.006 05:04:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:47.006 05:04:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:47.006 05:04:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:47.006 05:04:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:47.006 
node0=1024 expecting 1024 00:05:47.006 05:04:36 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:47.006 05:04:36 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:47.006 00:05:47.006 real 0m0.564s 00:05:47.006 user 0m0.273s 00:05:47.006 sys 0m0.328s 00:05:47.006 05:04:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:47.006 05:04:36 -- common/autotest_common.sh@10 -- # set +x 00:05:47.006 ************************************ 00:05:47.006 END TEST even_2G_alloc 00:05:47.006 ************************************ 00:05:47.006 05:04:36 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:47.006 05:04:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.006 05:04:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.006 05:04:36 -- common/autotest_common.sh@10 -- # set +x 00:05:47.006 ************************************ 00:05:47.006 START TEST odd_alloc 00:05:47.006 ************************************ 00:05:47.006 05:04:36 -- common/autotest_common.sh@1114 -- # odd_alloc 00:05:47.007 05:04:36 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:47.007 05:04:36 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:47.007 05:04:36 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:47.007 05:04:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:47.007 05:04:36 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:47.007 05:04:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:47.007 05:04:36 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:47.007 05:04:36 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:47.007 05:04:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:47.007 05:04:36 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:47.007 05:04:36 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:47.007 05:04:36 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:47.007 05:04:36 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:47.266 05:04:36 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:47.266 05:04:36 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:47.266 05:04:36 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:47.266 05:04:36 -- setup/hugepages.sh@83 -- # : 0 00:05:47.266 05:04:36 -- setup/hugepages.sh@84 -- # : 0 00:05:47.266 05:04:36 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:47.266 05:04:36 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:47.266 05:04:36 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:47.266 05:04:36 -- setup/hugepages.sh@160 -- # setup output 00:05:47.266 05:04:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:47.266 05:04:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:47.527 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:47.527 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:47.527 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:47.527 05:04:37 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:47.527 05:04:37 -- setup/hugepages.sh@89 -- # local node 00:05:47.527 05:04:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:47.527 05:04:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:47.527 05:04:37 -- setup/hugepages.sh@92 -- # local surp 00:05:47.527 05:04:37 -- setup/hugepages.sh@93 -- # local resv 00:05:47.527 05:04:37 -- setup/hugepages.sh@94 -- # local anon 00:05:47.527 05:04:37 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:47.527 05:04:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:47.527 05:04:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:47.527 05:04:37 -- setup/common.sh@18 -- # local node= 00:05:47.527 05:04:37 -- setup/common.sh@19 -- # local var val 00:05:47.527 05:04:37 -- setup/common.sh@20 -- # local mem_f mem 00:05:47.527 05:04:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.527 05:04:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.527 05:04:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.527 05:04:37 -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.527 05:04:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.527 05:04:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6654788 kB' 'MemAvailable: 9448036 kB' 'Buffers: 2684 kB' 'Cached: 2996924 kB' 'SwapCached: 0 kB' 'Active: 456752 kB' 'Inactive: 2661260 kB' 'Active(anon): 128892 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119792 kB' 'Mapped: 50676 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182924 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100368 kB' 'KernelStack: 6792 kB' 'PageTables: 4676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 320464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 
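
For reference, the odd_alloc test that starts just above sizes its request from HUGEMEM=2049 MiB: 2049 * 1024 = 2098176 kB, which at the 2048 kB hugepage size shown in the meminfo dumps is 1024 whole pages plus half a page, and the helper settles on nr_hugepages=1025, a deliberately odd page count, hence the test name. A quick check of that arithmetic (illustrative only, not part of the harness):

echo $((2049 * 1024))      # 2098176 kB requested via HUGEMEM=2049
echo $((2098176 / 2048))   # 1024 whole 2048 kB pages, 1024 kB left over
                           # the trace above shows the helper settling on 1025 pages
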
00:05:47.527 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.527 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.527 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # 
continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.528 05:04:37 -- setup/common.sh@33 -- # echo 0 00:05:47.528 05:04:37 -- setup/common.sh@33 -- # return 0 00:05:47.528 05:04:37 -- setup/hugepages.sh@97 -- # anon=0 00:05:47.528 05:04:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:47.528 05:04:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:47.528 05:04:37 -- setup/common.sh@18 -- # local node= 00:05:47.528 05:04:37 -- setup/common.sh@19 -- # local var val 00:05:47.528 05:04:37 -- setup/common.sh@20 -- # local mem_f mem 00:05:47.528 05:04:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.528 05:04:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.528 05:04:37 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.528 05:04:37 -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.528 05:04:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6654824 kB' 'MemAvailable: 9448072 kB' 'Buffers: 2684 kB' 'Cached: 2996924 kB' 'SwapCached: 0 kB' 'Active: 456396 kB' 'Inactive: 2661260 kB' 'Active(anon): 128536 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119972 kB' 'Mapped: 50568 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182912 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100356 kB' 'KernelStack: 6768 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 320832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.528 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.528 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 
-- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 
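
The long runs of "[[ FieldName == ... ]]" / "continue" entries that dominate this part of the log are the get_meminfo helper from setup/common.sh scanning a cached copy of /proc/meminfo one field at a time until it reaches the requested counter (AnonHugePages, HugePages_Surp, HugePages_Rsvd and HugePages_Total in turn). A minimal sketch of that pattern, reconstructed from the trace; it is a simplified illustration, not the verbatim SPDK helper:

shopt -s extglob
get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo
    local -a mem
    # With a node index, prefer the per-node counters under sysfs.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    # Walk the snapshot until the requested field is found, then print its value.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
# Example calls mirroring this trace:
#   get_meminfo HugePages_Total      -> 1025
#   get_meminfo HugePages_Surp 0     -> 0   (node0 figure)
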
00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 
00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.529 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.529 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.530 05:04:37 -- setup/common.sh@33 -- # echo 0 00:05:47.530 05:04:37 -- setup/common.sh@33 -- # return 0 00:05:47.530 05:04:37 -- setup/hugepages.sh@99 -- # surp=0 00:05:47.530 05:04:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:47.530 05:04:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:47.530 05:04:37 -- setup/common.sh@18 -- # local node= 00:05:47.530 05:04:37 -- setup/common.sh@19 -- # local var val 00:05:47.530 05:04:37 -- setup/common.sh@20 -- # local mem_f mem 00:05:47.530 05:04:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.530 05:04:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.530 05:04:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.530 05:04:37 -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.530 05:04:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6654824 kB' 'MemAvailable: 9448072 kB' 'Buffers: 2684 kB' 'Cached: 2996924 kB' 'SwapCached: 0 kB' 'Active: 456420 kB' 'Inactive: 2661260 kB' 'Active(anon): 128560 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119712 kB' 'Mapped: 50568 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182904 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100348 kB' 'KernelStack: 6768 kB' 
'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 320832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.530 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.530 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.531 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.531 05:04:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.531 05:04:37 -- setup/common.sh@33 -- # echo 0 00:05:47.531 05:04:37 -- setup/common.sh@33 -- # return 0 00:05:47.531 05:04:37 -- setup/hugepages.sh@100 -- # resv=0 00:05:47.531 nr_hugepages=1025 00:05:47.531 05:04:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:47.531 resv_hugepages=0 00:05:47.531 05:04:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:47.531 surplus_hugepages=0 00:05:47.531 05:04:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:47.531 anon_hugepages=0 00:05:47.531 05:04:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:47.531 05:04:37 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:47.531 05:04:37 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:47.531 05:04:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:47.531 05:04:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:47.793 05:04:37 -- setup/common.sh@18 -- # local node= 00:05:47.793 05:04:37 -- setup/common.sh@19 -- # local var val 00:05:47.793 05:04:37 -- setup/common.sh@20 -- # local mem_f mem 00:05:47.793 05:04:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.793 05:04:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.793 05:04:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.793 05:04:37 -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.793 05:04:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.793 05:04:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6654824 kB' 'MemAvailable: 9448072 kB' 'Buffers: 2684 kB' 'Cached: 2996924 kB' 'SwapCached: 0 kB' 'Active: 456680 kB' 'Inactive: 2661260 kB' 'Active(anon): 128820 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119972 kB' 'Mapped: 50568 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182904 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100348 kB' 'KernelStack: 6768 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 320832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 
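
At this point the trace has collected every term of the consistency check: anon=0, surp=0, resv=0, and it echoes nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 before testing the global count. The check it performs reduces to the following, with the values taken from the log:

nr_hugepages=1025 surp=0 resv=0
# HugePages_Total reported by /proc/meminfo must equal the requested pages
# plus any surplus and reserved pages; here 1025 == 1025 + 0 + 0.
(( 1025 == nr_hugepages + surp + resv )) && echo "global hugepage count OK"
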
00:05:47.793 05:04:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # continue 
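
Each meminfo snapshot printed in this trace already carries the counters being verified: HugePages_Total: 1025, HugePages_Free: 1025, Hugepagesize: 2048 kB and Hugetlb: 2099200 kB, where 1025 * 2048 = 2099200 kB. Outside the harness the same figures can be spot-checked directly, for example:

grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb' /proc/meminfo
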
00:05:47.793 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.793 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.793 05:04:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.794 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.794 05:04:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.794 05:04:37 -- setup/common.sh@33 -- # echo 1025 00:05:47.794 05:04:37 -- setup/common.sh@33 -- # return 0 00:05:47.794 05:04:37 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:47.794 05:04:37 -- setup/hugepages.sh@112 -- # get_nodes 00:05:47.794 05:04:37 -- setup/hugepages.sh@27 -- # local node 00:05:47.794 05:04:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:47.794 05:04:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
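The scan traced above is the test's get_meminfo helper walking /proc/meminfo one key at a time with IFS=': ' until it reaches HugePages_Total, echoing the value (1025 in this run) back to the caller, which then asserts (( 1025 == nr_hugepages + surp + resv )). A minimal sketch of that lookup pattern follows; the helper name get_meminfo_value is hypothetical, and the real helper in test/setup/common.sh additionally handles the per-node meminfo files under /sys/devices/system/node, whose lines carry a "Node N" prefix.

# Sketch only: print the value of one /proc/meminfo key, using the same
# IFS=': ' field splitting that the trace above shows.
get_meminfo_value() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"    # e.g. 1025 for HugePages_Total in this run
            return 0
        fi
    done </proc/meminfo
    return 1
}

# Usage matching the assertion in the trace (nr_hugepages, surp and resv
# come from the surrounding hugepages.sh test, not from this sketch):
#   total=$(get_meminfo_value HugePages_Total)
#   (( total == nr_hugepages + surp + resv ))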
00:05:47.794 05:04:37 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:47.794 05:04:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:47.794 05:04:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:47.794 05:04:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:47.794 05:04:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:47.794 05:04:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:47.794 05:04:37 -- setup/common.sh@18 -- # local node=0 00:05:47.794 05:04:37 -- setup/common.sh@19 -- # local var val 00:05:47.794 05:04:37 -- setup/common.sh@20 -- # local mem_f mem 00:05:47.794 05:04:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.794 05:04:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:47.795 05:04:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:47.795 05:04:37 -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.795 05:04:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6654824 kB' 'MemUsed: 5584284 kB' 'SwapCached: 0 kB' 'Active: 456676 kB' 'Inactive: 2661260 kB' 'Active(anon): 128816 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2999608 kB' 'Mapped: 50568 kB' 'AnonPages: 119984 kB' 'Shmem: 10488 kB' 'KernelStack: 6768 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82556 kB' 'Slab: 182904 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 
05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 
05:04:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.795 05:04:37 -- setup/common.sh@32 -- # continue 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.795 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.796 05:04:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.796 05:04:37 -- setup/common.sh@33 -- # echo 0 00:05:47.796 05:04:37 -- setup/common.sh@33 -- # return 0 00:05:47.796 05:04:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:47.796 05:04:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:47.796 05:04:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:47.796 05:04:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:47.796 node0=1025 expecting 1025 00:05:47.796 05:04:37 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:47.796 05:04:37 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:47.796 00:05:47.796 real 0m0.584s 00:05:47.796 user 0m0.270s 00:05:47.796 sys 0m0.320s 00:05:47.796 05:04:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:47.796 05:04:37 -- common/autotest_common.sh@10 -- # set +x 00:05:47.796 ************************************ 00:05:47.796 END TEST odd_alloc 00:05:47.796 ************************************ 00:05:47.796 05:04:37 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:47.796 05:04:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.796 05:04:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.796 05:04:37 -- common/autotest_common.sh@10 -- # set +x 00:05:47.796 ************************************ 00:05:47.796 START TEST custom_alloc 00:05:47.796 ************************************ 00:05:47.796 05:04:37 -- common/autotest_common.sh@1114 -- # custom_alloc 00:05:47.796 05:04:37 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:47.796 05:04:37 -- setup/hugepages.sh@169 -- # local node 00:05:47.796 05:04:37 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:47.796 05:04:37 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:47.796 05:04:37 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:47.796 05:04:37 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:05:47.796 05:04:37 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:47.796 05:04:37 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:47.796 05:04:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:47.796 05:04:37 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:47.796 05:04:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:47.796 05:04:37 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:47.796 05:04:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:47.796 05:04:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:47.796 05:04:37 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:47.796 05:04:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:47.796 05:04:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:47.796 05:04:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:47.796 05:04:37 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:47.796 05:04:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:47.796 05:04:37 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:47.796 05:04:37 -- setup/hugepages.sh@83 -- # : 0 00:05:47.796 05:04:37 -- setup/hugepages.sh@84 -- # : 0 00:05:47.796 05:04:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:47.796 05:04:37 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:47.796 05:04:37 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:47.796 05:04:37 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:47.796 05:04:37 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:47.796 05:04:37 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:47.796 05:04:37 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:47.796 05:04:37 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:47.796 05:04:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:47.796 05:04:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:47.796 05:04:37 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:47.796 05:04:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:47.796 05:04:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:47.796 05:04:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:47.796 05:04:37 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:47.796 05:04:37 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:47.796 05:04:37 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:47.796 05:04:37 -- setup/hugepages.sh@78 -- # return 0 00:05:47.796 05:04:37 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:47.796 05:04:37 -- setup/hugepages.sh@187 -- # setup output 00:05:47.796 05:04:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:47.796 05:04:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:48.056 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:48.056 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:48.056 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:48.056 05:04:37 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:48.056 05:04:37 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:48.056 05:04:37 -- setup/hugepages.sh@89 -- # local node 00:05:48.056 05:04:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:48.056 05:04:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:48.056 05:04:37 -- setup/hugepages.sh@92 -- # local surp 
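At this point custom_alloc asks get_test_nr_hugepages for a 1048576 kB pool; with the 2048 kB hugepage size reported in the meminfo dump below, that works out to nr_hugepages=512, and on this single-node VM the whole pool is assigned to node0 via HUGENODE='nodes_hp[0]=512' before scripts/setup.sh reconfigures the devices. A short sketch of that arithmetic, with the values taken from this trace:

# Sketch of the arithmetic behind nr_hugepages=512 in the trace above.
size=1048576             # requested pool size in kB (1 GiB), as passed to get_test_nr_hugepages
default_hugepages=2048   # Hugepagesize in kB, as reported by /proc/meminfo in this log
(( size >= default_hugepages )) && nr_hugepages=$(( size / default_hugepages ))
echo "nr_hugepages=$nr_hugepages"    # -> nr_hugepages=512

# With a single NUMA node every page lands on node0, which is exactly the
# HUGENODE='nodes_hp[0]=512' string handed to scripts/setup.sh in the trace.
nodes_hp=()
nodes_hp[0]=$nr_hugepages
HUGENODE="nodes_hp[0]=${nodes_hp[0]}"
echo "$HUGENODE"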
00:05:48.056 05:04:37 -- setup/hugepages.sh@93 -- # local resv 00:05:48.056 05:04:37 -- setup/hugepages.sh@94 -- # local anon 00:05:48.056 05:04:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:48.056 05:04:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:48.056 05:04:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:48.056 05:04:37 -- setup/common.sh@18 -- # local node= 00:05:48.056 05:04:37 -- setup/common.sh@19 -- # local var val 00:05:48.056 05:04:37 -- setup/common.sh@20 -- # local mem_f mem 00:05:48.056 05:04:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.056 05:04:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:48.056 05:04:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:48.056 05:04:37 -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.056 05:04:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.056 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.056 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.056 05:04:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7705540 kB' 'MemAvailable: 10498788 kB' 'Buffers: 2684 kB' 'Cached: 2996924 kB' 'SwapCached: 0 kB' 'Active: 457336 kB' 'Inactive: 2661260 kB' 'Active(anon): 129476 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120316 kB' 'Mapped: 50704 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182872 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100316 kB' 'KernelStack: 6760 kB' 'PageTables: 4560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 320832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:48.056 05:04:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.056 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.056 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.056 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.056 05:04:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.056 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.056 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.056 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.056 05:04:37 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.056 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.057 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.057 05:04:37 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 
00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.320 05:04:37 -- setup/common.sh@33 -- # echo 0 00:05:48.320 05:04:37 -- setup/common.sh@33 -- # return 0 00:05:48.320 05:04:37 -- setup/hugepages.sh@97 -- # anon=0 00:05:48.320 05:04:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:48.320 05:04:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:48.320 05:04:37 -- setup/common.sh@18 -- # local node= 00:05:48.320 05:04:37 -- setup/common.sh@19 -- # local var val 00:05:48.320 05:04:37 -- setup/common.sh@20 -- # local mem_f mem 00:05:48.320 05:04:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
00:05:48.320 05:04:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:48.320 05:04:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:48.320 05:04:37 -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.320 05:04:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7705792 kB' 'MemAvailable: 10499040 kB' 'Buffers: 2684 kB' 'Cached: 2996924 kB' 'SwapCached: 0 kB' 'Active: 456680 kB' 'Inactive: 2661260 kB' 'Active(anon): 128820 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119964 kB' 'Mapped: 50756 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182860 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100304 kB' 'KernelStack: 6744 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 320832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.320 05:04:37 -- 
setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.320 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.320 05:04:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 
00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.321 05:04:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.321 05:04:37 -- setup/common.sh@33 -- # echo 0 00:05:48.321 05:04:37 -- setup/common.sh@33 -- # return 0 00:05:48.321 05:04:37 -- setup/hugepages.sh@99 -- # surp=0 00:05:48.321 05:04:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:48.321 05:04:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:48.321 05:04:37 -- setup/common.sh@18 -- # local node= 00:05:48.321 05:04:37 -- setup/common.sh@19 -- # local var val 00:05:48.321 05:04:37 -- setup/common.sh@20 -- # local mem_f mem 00:05:48.321 05:04:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.321 05:04:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:48.321 05:04:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:48.321 05:04:37 -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.321 05:04:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.321 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7706452 kB' 'MemAvailable: 10499700 kB' 'Buffers: 2684 kB' 'Cached: 2996924 kB' 'SwapCached: 0 kB' 'Active: 456572 kB' 'Inactive: 2661260 kB' 'Active(anon): 128712 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120152 kB' 'Mapped: 
50704 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182860 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100304 kB' 'KernelStack: 6712 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 320832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 
00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.322 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.322 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 
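(The xtrace above and below shows setup/common.sh walking every /proc/meminfo field and skipping all but the one requested. As a hedged, stand-alone sketch of that lookup — function and variable names here are illustrative, not the repo's actual helpers — the same effect can be had with a small loop that splits each line on ": " and echoes the value of the wanted field.)

```bash
#!/usr/bin/env bash
# Illustrative sketch of the field lookup traced above: split each meminfo
# line on ": ", skip non-matching fields, echo the value of the wanted one.
get_meminfo_field() {
    local want=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done < "$file"
    return 1
}

get_meminfo_field HugePages_Rsvd   # the trace above echoes 0 for this field
```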
00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.323 05:04:37 -- setup/common.sh@33 -- # echo 0 00:05:48.323 05:04:37 -- setup/common.sh@33 -- # return 0 00:05:48.323 05:04:37 -- setup/hugepages.sh@100 -- # resv=0 00:05:48.323 nr_hugepages=512 00:05:48.323 05:04:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:48.323 resv_hugepages=0 00:05:48.323 05:04:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:48.323 surplus_hugepages=0 00:05:48.323 05:04:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:48.323 anon_hugepages=0 00:05:48.323 05:04:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:48.323 05:04:37 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:48.323 05:04:37 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:48.323 05:04:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:48.323 05:04:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:48.323 05:04:37 -- setup/common.sh@18 -- # local node= 00:05:48.323 05:04:37 -- setup/common.sh@19 -- # local var val 00:05:48.323 05:04:37 -- setup/common.sh@20 -- # local mem_f mem 00:05:48.323 05:04:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.323 05:04:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:48.323 05:04:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:48.323 05:04:37 -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.323 05:04:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7706452 kB' 'MemAvailable: 10499700 kB' 'Buffers: 2684 kB' 'Cached: 2996924 kB' 'SwapCached: 0 kB' 'Active: 456388 kB' 'Inactive: 2661260 kB' 'Active(anon): 128528 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119672 kB' 'Mapped: 50568 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182868 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100312 kB' 'KernelStack: 6752 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 320832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.323 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.323 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 
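(After reading HugePages_Rsvd and HugePages_Total back, the trace checks "512 == nr_hugepages + surp + resv". The following is a hedged restatement of that accounting check with assumed variable names, using the values echoed in this run; it is not the project's hugepages.sh.)

```bash
# Consistency check as traced: the pool the kernel reports must equal the
# requested page count plus the surplus and reserved pages just read back.
nr_hugepages=512   # requested by this test (echoed above)
resv=0             # HugePages_Rsvd read back above
surp=0             # HugePages_Surp read back above
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool consistent: $total pages"
else
    echo "unexpected hugepage pool size: $total" >&2
fi
```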
00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.324 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.324 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.325 05:04:37 -- setup/common.sh@33 -- # echo 512 00:05:48.325 05:04:37 -- setup/common.sh@33 -- # return 0 00:05:48.325 05:04:37 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:48.325 05:04:37 -- setup/hugepages.sh@112 -- # get_nodes 00:05:48.325 05:04:37 -- setup/hugepages.sh@27 -- # local node 00:05:48.325 05:04:37 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:05:48.325 05:04:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:48.325 05:04:37 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:48.325 05:04:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:48.325 05:04:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:48.325 05:04:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:48.325 05:04:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:48.325 05:04:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:48.325 05:04:37 -- setup/common.sh@18 -- # local node=0 00:05:48.325 05:04:37 -- setup/common.sh@19 -- # local var val 00:05:48.325 05:04:37 -- setup/common.sh@20 -- # local mem_f mem 00:05:48.325 05:04:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.325 05:04:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:48.325 05:04:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:48.325 05:04:37 -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.325 05:04:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7706452 kB' 'MemUsed: 4532656 kB' 'SwapCached: 0 kB' 'Active: 456388 kB' 'Inactive: 2661260 kB' 'Active(anon): 128528 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2999608 kB' 'Mapped: 50568 kB' 'AnonPages: 119932 kB' 'Shmem: 10488 kB' 'KernelStack: 6752 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82556 kB' 'Slab: 182868 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 
05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.325 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.325 05:04:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.326 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.326 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.326 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.326 05:04:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.326 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.326 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.326 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.326 05:04:37 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.326 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.326 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.326 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.326 05:04:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.326 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.326 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.326 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.326 05:04:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.326 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.326 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.326 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.326 05:04:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.326 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.326 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.326 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.326 05:04:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.326 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.326 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.326 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.326 05:04:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.326 05:04:37 -- setup/common.sh@32 -- # continue 00:05:48.326 05:04:37 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.326 05:04:37 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.326 05:04:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.326 05:04:37 -- setup/common.sh@33 -- # echo 0 00:05:48.326 05:04:37 -- setup/common.sh@33 -- # return 0 00:05:48.326 05:04:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:48.326 05:04:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:48.326 05:04:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:48.326 05:04:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:48.326 node0=512 expecting 512 00:05:48.326 05:04:37 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:48.326 05:04:37 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:48.326 00:05:48.326 real 0m0.580s 00:05:48.326 user 0m0.325s 00:05:48.326 sys 0m0.289s 00:05:48.326 05:04:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.326 05:04:37 -- common/autotest_common.sh@10 -- # set +x 00:05:48.326 ************************************ 00:05:48.326 END TEST custom_alloc 00:05:48.326 ************************************ 00:05:48.326 05:04:38 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:48.326 05:04:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:48.326 05:04:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.326 05:04:38 -- common/autotest_common.sh@10 -- # set +x 00:05:48.326 ************************************ 00:05:48.326 START TEST no_shrink_alloc 00:05:48.326 ************************************ 00:05:48.326 05:04:38 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:05:48.326 05:04:38 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:48.326 05:04:38 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:48.326 05:04:38 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:48.326 05:04:38 -- 
setup/hugepages.sh@51 -- # shift 00:05:48.326 05:04:38 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:48.326 05:04:38 -- setup/hugepages.sh@52 -- # local node_ids 00:05:48.326 05:04:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:48.326 05:04:38 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:48.326 05:04:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:48.326 05:04:38 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:48.326 05:04:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:48.326 05:04:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:48.326 05:04:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:48.326 05:04:38 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:48.326 05:04:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:48.326 05:04:38 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:48.326 05:04:38 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:48.326 05:04:38 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:48.326 05:04:38 -- setup/hugepages.sh@73 -- # return 0 00:05:48.326 05:04:38 -- setup/hugepages.sh@198 -- # setup output 00:05:48.326 05:04:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.326 05:04:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:48.899 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:48.899 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:48.899 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:48.899 05:04:38 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:48.899 05:04:38 -- setup/hugepages.sh@89 -- # local node 00:05:48.899 05:04:38 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:48.899 05:04:38 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:48.899 05:04:38 -- setup/hugepages.sh@92 -- # local surp 00:05:48.899 05:04:38 -- setup/hugepages.sh@93 -- # local resv 00:05:48.899 05:04:38 -- setup/hugepages.sh@94 -- # local anon 00:05:48.899 05:04:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:48.899 05:04:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:48.899 05:04:38 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:48.899 05:04:38 -- setup/common.sh@18 -- # local node= 00:05:48.899 05:04:38 -- setup/common.sh@19 -- # local var val 00:05:48.899 05:04:38 -- setup/common.sh@20 -- # local mem_f mem 00:05:48.899 05:04:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.899 05:04:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:48.899 05:04:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:48.899 05:04:38 -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.899 05:04:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.899 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.899 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.899 05:04:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6666812 kB' 'MemAvailable: 9460060 kB' 'Buffers: 2684 kB' 'Cached: 2996924 kB' 'SwapCached: 0 kB' 'Active: 456760 kB' 'Inactive: 2661260 kB' 'Active(anon): 128900 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120004 kB' 
'Mapped: 50700 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182880 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100324 kB' 'KernelStack: 6744 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:48.899 05:04:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.899 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.899 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.899 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.899 05:04:38 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.899 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.899 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.899 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.899 05:04:38 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.899 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.899 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.899 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.899 05:04:38 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.899 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.899 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.899 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.899 05:04:38 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.899 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.899 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.899 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.899 05:04:38 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.899 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.899 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.899 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.899 05:04:38 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.899 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.899 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.899 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.899 05:04:38 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.899 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.899 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.899 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.899 05:04:38 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.899 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.899 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.899 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.899 05:04:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.899 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.899 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.899 05:04:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:48.899 05:04:38 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.899 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.899 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
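(The get_meminfo calls traced in this test switch between the system-wide /proc/meminfo and a per-node file under /sys/devices/system/node when a node id is supplied, stripping the "Node N " prefix before parsing. A minimal sketch of that selection, under the assumption of the standard procfs/sysfs paths shown in the trace:)

```bash
# Pick the meminfo source the way the trace does: per-node file if a node id
# is given and exists, otherwise /proc/meminfo; drop the "Node N " prefix.
node=${1:-}
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
grep -E 'HugePages_(Total|Free|Surp)|AnonHugePages' "$mem_f" | sed -E 's/^Node [0-9]+ //'
```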
00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:48.900 05:04:38 -- setup/common.sh@33 -- # echo 0 00:05:48.900 05:04:38 -- setup/common.sh@33 -- # return 0 00:05:48.900 05:04:38 -- setup/hugepages.sh@97 -- # anon=0 00:05:48.900 05:04:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:48.900 05:04:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:48.900 05:04:38 -- setup/common.sh@18 -- # local node= 00:05:48.900 05:04:38 -- setup/common.sh@19 -- # local var val 00:05:48.900 05:04:38 -- setup/common.sh@20 -- # local mem_f mem 00:05:48.900 05:04:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.900 05:04:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:48.900 05:04:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:48.900 05:04:38 -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.900 05:04:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6666812 kB' 'MemAvailable: 9460060 kB' 'Buffers: 2684 kB' 'Cached: 2996924 kB' 'SwapCached: 0 kB' 'Active: 456480 kB' 'Inactive: 2661260 kB' 'Active(anon): 128620 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120008 kB' 'Mapped: 50700 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182880 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100324 kB' 'KernelStack: 6728 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.900 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.900 05:04:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.901 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.901 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 
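[editor's note] The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" above are setup/common.sh's get_meminfo helper scanning /proc/meminfo (or a per-node meminfo file) one field at a time until it hits the requested key, after which hugepages.sh compares the returned counters. The following is a minimal sketch reconstructed from this trace, not the actual SPDK source; the function name, the IFS/read parsing, and the accounting check mirror what the xtrace shows, while the standalone usage at the end is assumed for illustration.

#!/usr/bin/env bash
# Sketch (assumed from the trace, not copied from SPDK) of the meminfo lookup.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo mem

    # Per-node queries read that node's meminfo file instead of /proc/meminfo.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo

    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        # Each line looks like "HugePages_Surp:       0" or "MemTotal: ... kB".
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# Accounting the test applies to the values it just read: the configured page
# count must equal the total minus surplus and reserved pages (all 0 here).
nr_hugepages=1024
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
total=$(get_meminfo HugePages_Total)
(( total == nr_hugepages + surp + resv )) && echo "node0=$nr_hugepages expecting $nr_hugepages"

In the log that follows, exactly this sequence plays out: HugePages_Surp and HugePages_Rsvd both resolve to 0, HugePages_Total resolves to 1024, and the per-node pass repeats the same lookup against /sys/devices/system/node/node0/meminfo before printing "node0=1024 expecting 1024".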
00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.902 05:04:38 -- setup/common.sh@33 -- # echo 0 00:05:48.902 05:04:38 -- setup/common.sh@33 -- # return 0 00:05:48.902 05:04:38 -- setup/hugepages.sh@99 -- # surp=0 00:05:48.902 05:04:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:48.902 05:04:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:48.902 05:04:38 -- setup/common.sh@18 -- # local node= 00:05:48.902 05:04:38 -- setup/common.sh@19 -- # local var val 00:05:48.902 05:04:38 -- setup/common.sh@20 -- # local mem_f mem 00:05:48.902 05:04:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.902 05:04:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:48.902 05:04:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:48.902 05:04:38 -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.902 05:04:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6666812 kB' 'MemAvailable: 9460060 kB' 'Buffers: 2684 kB' 'Cached: 2996924 kB' 'SwapCached: 0 kB' 'Active: 456456 kB' 'Inactive: 2661260 kB' 'Active(anon): 128596 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119700 kB' 'Mapped: 50700 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182880 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100324 kB' 'KernelStack: 6764 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': 
' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 
-- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.902 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.902 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.903 05:04:38 -- setup/common.sh@33 -- # echo 0 00:05:48.903 05:04:38 -- setup/common.sh@33 -- # return 0 00:05:48.903 nr_hugepages=1024 00:05:48.903 resv_hugepages=0 00:05:48.903 surplus_hugepages=0 00:05:48.903 anon_hugepages=0 00:05:48.903 05:04:38 -- setup/hugepages.sh@100 -- # resv=0 00:05:48.903 05:04:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:48.903 05:04:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:48.903 05:04:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:48.903 05:04:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:48.903 05:04:38 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:48.903 05:04:38 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:48.903 05:04:38 -- setup/hugepages.sh@110 -- # 
get_meminfo HugePages_Total 00:05:48.903 05:04:38 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:48.903 05:04:38 -- setup/common.sh@18 -- # local node= 00:05:48.903 05:04:38 -- setup/common.sh@19 -- # local var val 00:05:48.903 05:04:38 -- setup/common.sh@20 -- # local mem_f mem 00:05:48.903 05:04:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.903 05:04:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:48.903 05:04:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:48.903 05:04:38 -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.903 05:04:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6669052 kB' 'MemAvailable: 9462300 kB' 'Buffers: 2684 kB' 'Cached: 2996924 kB' 'SwapCached: 0 kB' 'Active: 456456 kB' 'Inactive: 2661260 kB' 'Active(anon): 128596 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120008 kB' 'Mapped: 50568 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182880 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100324 kB' 'KernelStack: 6768 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.903 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.903 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.904 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.904 05:04:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- 
# IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.905 05:04:38 -- setup/common.sh@33 -- # echo 1024 00:05:48.905 05:04:38 -- setup/common.sh@33 -- # return 0 00:05:48.905 05:04:38 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:48.905 05:04:38 -- setup/hugepages.sh@112 -- # get_nodes 00:05:48.905 05:04:38 -- setup/hugepages.sh@27 -- # local node 00:05:48.905 05:04:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:48.905 05:04:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:48.905 05:04:38 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:48.905 05:04:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:48.905 05:04:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:48.905 05:04:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:48.905 05:04:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:48.905 05:04:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:48.905 05:04:38 -- setup/common.sh@18 -- # local node=0 00:05:48.905 05:04:38 -- setup/common.sh@19 -- # local var val 00:05:48.905 05:04:38 -- setup/common.sh@20 -- # local mem_f mem 00:05:48.905 05:04:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.905 05:04:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:48.905 05:04:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:48.905 05:04:38 -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.905 05:04:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6669052 kB' 'MemUsed: 5570056 kB' 'SwapCached: 0 kB' 'Active: 456476 kB' 'Inactive: 2661260 kB' 'Active(anon): 128616 kB' 'Inactive(anon): 0 kB' 
'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2999608 kB' 'Mapped: 50568 kB' 'AnonPages: 120008 kB' 'Shmem: 10488 kB' 'KernelStack: 6768 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82556 kB' 'Slab: 182880 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.905 05:04:38 
-- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.905 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.905 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.906 05:04:38 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # continue 00:05:48.906 05:04:38 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:48.906 05:04:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.906 05:04:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.906 05:04:38 -- setup/common.sh@33 -- # echo 0 00:05:48.906 05:04:38 -- setup/common.sh@33 -- # return 0 00:05:48.906 node0=1024 expecting 1024 00:05:48.906 05:04:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:48.906 05:04:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:48.906 05:04:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:48.906 05:04:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:48.906 05:04:38 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:48.906 05:04:38 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:48.906 05:04:38 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:48.906 05:04:38 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:48.906 05:04:38 -- setup/hugepages.sh@202 -- # setup output 00:05:48.906 05:04:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.906 05:04:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:49.477 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:49.477 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:49.477 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:49.477 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:49.477 05:04:39 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:49.477 05:04:39 -- setup/hugepages.sh@89 -- # local node 00:05:49.477 05:04:39 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:49.477 05:04:39 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:49.477 05:04:39 -- setup/hugepages.sh@92 -- # local surp 00:05:49.477 05:04:39 -- setup/hugepages.sh@93 -- # local resv 00:05:49.477 05:04:39 -- setup/hugepages.sh@94 -- # local anon 00:05:49.477 05:04:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:49.477 05:04:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:49.477 05:04:39 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:49.477 05:04:39 -- setup/common.sh@18 -- # local node= 00:05:49.477 05:04:39 -- setup/common.sh@19 -- # local var val 00:05:49.477 05:04:39 -- setup/common.sh@20 -- # local mem_f mem 00:05:49.477 05:04:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.477 05:04:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.477 05:04:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.477 05:04:39 -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.477 05:04:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.477 05:04:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6677324 kB' 'MemAvailable: 9470572 kB' 'Buffers: 2684 kB' 'Cached: 2996924 kB' 'SwapCached: 0 kB' 'Active: 457424 kB' 'Inactive: 2661260 kB' 'Active(anon): 129564 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120480 kB' 'Mapped: 50696 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 
182920 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100364 kB' 'KernelStack: 6824 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.477 05:04:39 -- setup/common.sh@32 
-- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.477 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.477 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.478 05:04:39 -- setup/common.sh@33 -- # echo 0 00:05:49.478 05:04:39 -- setup/common.sh@33 -- # return 0 00:05:49.478 05:04:39 -- setup/hugepages.sh@97 -- # anon=0 00:05:49.478 05:04:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:49.478 05:04:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:49.478 05:04:39 -- setup/common.sh@18 -- # local node= 00:05:49.478 05:04:39 -- setup/common.sh@19 -- # local var val 00:05:49.478 05:04:39 -- setup/common.sh@20 -- # local mem_f mem 00:05:49.478 05:04:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.478 05:04:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.478 05:04:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.478 05:04:39 -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.478 05:04:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6677324 kB' 'MemAvailable: 9470572 kB' 'Buffers: 2684 kB' 'Cached: 2996924 kB' 'SwapCached: 0 kB' 'Active: 456552 kB' 'Inactive: 2661260 kB' 'Active(anon): 128692 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120088 kB' 'Mapped: 50620 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182928 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100372 kB' 'KernelStack: 6744 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.478 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.478 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 
00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 
05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.479 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.479 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.480 05:04:39 -- setup/common.sh@33 -- # echo 0 00:05:49.480 05:04:39 -- setup/common.sh@33 -- # return 0 00:05:49.480 05:04:39 -- setup/hugepages.sh@99 -- # surp=0 00:05:49.480 05:04:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:49.480 05:04:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:49.480 05:04:39 -- setup/common.sh@18 -- # local node= 00:05:49.480 05:04:39 -- setup/common.sh@19 -- # local var val 00:05:49.480 05:04:39 -- setup/common.sh@20 -- # local mem_f mem 00:05:49.480 05:04:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.480 05:04:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.480 05:04:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.480 05:04:39 -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.480 05:04:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6677324 kB' 'MemAvailable: 9470572 kB' 'Buffers: 2684 kB' 'Cached: 2996924 kB' 'SwapCached: 0 kB' 'Active: 456488 kB' 'Inactive: 2661260 kB' 'Active(anon): 128628 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120000 kB' 'Mapped: 50568 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182940 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100384 kB' 'KernelStack: 6768 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 
-- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 
05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.480 05:04:39 -- setup/common.sh@32 -- # 
continue 00:05:49.480 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.481 05:04:39 -- setup/common.sh@33 -- # echo 0 00:05:49.481 05:04:39 -- setup/common.sh@33 -- # return 0 00:05:49.481 05:04:39 -- setup/hugepages.sh@100 -- # resv=0 00:05:49.481 nr_hugepages=1024 00:05:49.481 resv_hugepages=0 00:05:49.481 surplus_hugepages=0 00:05:49.481 anon_hugepages=0 00:05:49.481 05:04:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:49.481 05:04:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:49.481 05:04:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:49.481 05:04:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:49.481 05:04:39 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:49.481 05:04:39 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:49.481 05:04:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:49.481 05:04:39 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:05:49.481 05:04:39 -- setup/common.sh@18 -- # local node= 00:05:49.481 05:04:39 -- setup/common.sh@19 -- # local var val 00:05:49.481 05:04:39 -- setup/common.sh@20 -- # local mem_f mem 00:05:49.481 05:04:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.481 05:04:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.481 05:04:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.481 05:04:39 -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.481 05:04:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6677324 kB' 'MemAvailable: 9470572 kB' 'Buffers: 2684 kB' 'Cached: 2996924 kB' 'SwapCached: 0 kB' 'Active: 456476 kB' 'Inactive: 2661260 kB' 'Active(anon): 128616 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119968 kB' 'Mapped: 50568 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182940 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100384 kB' 'KernelStack: 6752 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.481 05:04:39 -- 
setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.481 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.481 05:04:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 
00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 
05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.482 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.482 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.483 05:04:39 -- setup/common.sh@33 -- # echo 1024 00:05:49.483 05:04:39 -- setup/common.sh@33 -- # return 0 00:05:49.483 05:04:39 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:49.483 05:04:39 -- setup/hugepages.sh@112 -- # get_nodes 00:05:49.483 05:04:39 -- setup/hugepages.sh@27 -- # local node 00:05:49.483 05:04:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:49.483 05:04:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:49.483 05:04:39 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:49.483 05:04:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:49.483 05:04:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:49.483 05:04:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:49.483 05:04:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:49.483 05:04:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:49.483 05:04:39 -- setup/common.sh@18 -- # local node=0 00:05:49.483 05:04:39 -- setup/common.sh@19 -- # local var val 00:05:49.483 05:04:39 -- setup/common.sh@20 -- # local mem_f mem 00:05:49.483 05:04:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.483 05:04:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:49.483 05:04:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:49.483 05:04:39 -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.483 05:04:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6678884 kB' 'MemUsed: 5560224 kB' 'SwapCached: 0 kB' 'Active: 454400 kB' 'Inactive: 2661260 kB' 'Active(anon): 126540 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'FilePages: 2999608 kB' 'Mapped: 49788 kB' 'AnonPages: 117936 kB' 'Shmem: 10488 kB' 'KernelStack: 6768 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82556 kB' 'Slab: 182940 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 
05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- 
# continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.483 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.483 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.484 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.484 05:04:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.484 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.484 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.484 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.484 05:04:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.484 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.484 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.484 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.484 05:04:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.484 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.484 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.484 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.484 05:04:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.484 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.484 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.484 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.484 05:04:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.484 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.484 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.484 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.484 05:04:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.484 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.484 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.484 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.484 05:04:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.484 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.484 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.484 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.484 05:04:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.484 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.484 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.484 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.484 05:04:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.484 05:04:39 -- setup/common.sh@32 -- # continue 00:05:49.484 05:04:39 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.484 05:04:39 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.484 05:04:39 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.484 05:04:39 -- setup/common.sh@33 -- # echo 0 00:05:49.484 05:04:39 -- setup/common.sh@33 -- # return 0 00:05:49.484 node0=1024 expecting 1024 00:05:49.484 ************************************ 00:05:49.484 END TEST no_shrink_alloc 00:05:49.484 ************************************ 00:05:49.484 05:04:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:49.484 05:04:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:49.484 05:04:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:49.484 05:04:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:49.484 05:04:39 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:49.484 05:04:39 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:49.484 00:05:49.484 real 0m1.150s 00:05:49.484 user 0m0.562s 00:05:49.484 sys 0m0.612s 00:05:49.484 05:04:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.484 05:04:39 -- common/autotest_common.sh@10 -- # set +x 00:05:49.484 05:04:39 -- setup/hugepages.sh@217 -- # clear_hp 00:05:49.484 05:04:39 -- setup/hugepages.sh@37 -- # local node hp 00:05:49.484 05:04:39 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:49.484 05:04:39 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:49.484 05:04:39 -- setup/hugepages.sh@41 -- # echo 0 00:05:49.484 05:04:39 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:49.484 05:04:39 -- setup/hugepages.sh@41 -- # echo 0 00:05:49.484 05:04:39 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:49.484 05:04:39 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:49.484 ************************************ 00:05:49.484 END TEST hugepages 00:05:49.484 ************************************ 00:05:49.484 00:05:49.484 real 0m5.065s 00:05:49.484 user 0m2.495s 00:05:49.484 sys 0m2.589s 00:05:49.484 05:04:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.484 05:04:39 -- common/autotest_common.sh@10 -- # set +x 00:05:49.744 05:04:39 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:49.744 05:04:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:49.744 05:04:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.744 05:04:39 -- common/autotest_common.sh@10 -- # set +x 00:05:49.744 ************************************ 00:05:49.744 START TEST driver 00:05:49.744 ************************************ 00:05:49.744 05:04:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:49.744 * Looking for test storage... 
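(Editor's note: the long runs of `continue` lines above are setup/common.sh's get_meminfo helper running under xtrace. It reads /proc/meminfo, or the per-node copy under /sys/devices/system/node, and walks every field until the requested one matches, so each lookup of HugePages_Total or HugePages_Surp prints one loop iteration per meminfo line before the hugepages test compares the result against the expected 1024 pages on node0. A condensed, standalone sketch of that lookup follows; the real helper reads the file into an array and strips the "Node N" prefix with an extglob expansion, and sed is used here only to keep the sketch self-contained.)

```bash
#!/usr/bin/env bash
# Sketch of the traced lookup: pick the per-node meminfo file when a node
# is given, strip the "Node N" prefix, then scan until the field matches.
get_meminfo() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

total=$(get_meminfo HugePages_Total)    # system-wide, 1024 in this run
surp=$(get_meminfo HugePages_Surp 0)    # node0 surplus pages, 0 in this run
echo "node0=$total expecting 1024"
```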
00:05:49.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:49.744 05:04:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:49.744 05:04:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:49.744 05:04:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:49.744 05:04:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:49.744 05:04:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:49.744 05:04:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:49.744 05:04:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:49.744 05:04:39 -- scripts/common.sh@335 -- # IFS=.-: 00:05:49.744 05:04:39 -- scripts/common.sh@335 -- # read -ra ver1 00:05:49.744 05:04:39 -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.744 05:04:39 -- scripts/common.sh@336 -- # read -ra ver2 00:05:49.744 05:04:39 -- scripts/common.sh@337 -- # local 'op=<' 00:05:49.744 05:04:39 -- scripts/common.sh@339 -- # ver1_l=2 00:05:49.744 05:04:39 -- scripts/common.sh@340 -- # ver2_l=1 00:05:49.744 05:04:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:49.744 05:04:39 -- scripts/common.sh@343 -- # case "$op" in 00:05:49.744 05:04:39 -- scripts/common.sh@344 -- # : 1 00:05:49.744 05:04:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:49.744 05:04:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.744 05:04:39 -- scripts/common.sh@364 -- # decimal 1 00:05:49.744 05:04:39 -- scripts/common.sh@352 -- # local d=1 00:05:49.744 05:04:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.744 05:04:39 -- scripts/common.sh@354 -- # echo 1 00:05:49.744 05:04:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:49.744 05:04:39 -- scripts/common.sh@365 -- # decimal 2 00:05:49.744 05:04:39 -- scripts/common.sh@352 -- # local d=2 00:05:49.744 05:04:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.744 05:04:39 -- scripts/common.sh@354 -- # echo 2 00:05:49.744 05:04:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:49.744 05:04:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:49.744 05:04:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:49.744 05:04:39 -- scripts/common.sh@367 -- # return 0 00:05:49.744 05:04:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.744 05:04:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:49.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.744 --rc genhtml_branch_coverage=1 00:05:49.744 --rc genhtml_function_coverage=1 00:05:49.744 --rc genhtml_legend=1 00:05:49.744 --rc geninfo_all_blocks=1 00:05:49.744 --rc geninfo_unexecuted_blocks=1 00:05:49.744 00:05:49.744 ' 00:05:49.744 05:04:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:49.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.744 --rc genhtml_branch_coverage=1 00:05:49.744 --rc genhtml_function_coverage=1 00:05:49.744 --rc genhtml_legend=1 00:05:49.744 --rc geninfo_all_blocks=1 00:05:49.744 --rc geninfo_unexecuted_blocks=1 00:05:49.744 00:05:49.744 ' 00:05:49.744 05:04:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:49.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.744 --rc genhtml_branch_coverage=1 00:05:49.744 --rc genhtml_function_coverage=1 00:05:49.744 --rc genhtml_legend=1 00:05:49.744 --rc geninfo_all_blocks=1 00:05:49.744 --rc geninfo_unexecuted_blocks=1 00:05:49.744 00:05:49.744 ' 00:05:49.744 05:04:39 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:49.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.744 --rc genhtml_branch_coverage=1 00:05:49.744 --rc genhtml_function_coverage=1 00:05:49.744 --rc genhtml_legend=1 00:05:49.744 --rc geninfo_all_blocks=1 00:05:49.744 --rc geninfo_unexecuted_blocks=1 00:05:49.744 00:05:49.744 ' 00:05:49.744 05:04:39 -- setup/driver.sh@68 -- # setup reset 00:05:49.744 05:04:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:49.744 05:04:39 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:50.370 05:04:40 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:50.370 05:04:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.370 05:04:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.370 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:05:50.370 ************************************ 00:05:50.370 START TEST guess_driver 00:05:50.370 ************************************ 00:05:50.370 05:04:40 -- common/autotest_common.sh@1114 -- # guess_driver 00:05:50.370 05:04:40 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:50.370 05:04:40 -- setup/driver.sh@47 -- # local fail=0 00:05:50.370 05:04:40 -- setup/driver.sh@49 -- # pick_driver 00:05:50.370 05:04:40 -- setup/driver.sh@36 -- # vfio 00:05:50.370 05:04:40 -- setup/driver.sh@21 -- # local iommu_grups 00:05:50.370 05:04:40 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:50.370 05:04:40 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:50.370 05:04:40 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:50.370 05:04:40 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:50.370 05:04:40 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:50.370 05:04:40 -- setup/driver.sh@32 -- # return 1 00:05:50.370 05:04:40 -- setup/driver.sh@38 -- # uio 00:05:50.370 05:04:40 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:50.370 05:04:40 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:50.370 05:04:40 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:50.370 05:04:40 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:50.370 05:04:40 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:50.370 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:50.370 05:04:40 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:50.370 Looking for driver=uio_pci_generic 00:05:50.370 05:04:40 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:50.370 05:04:40 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:50.370 05:04:40 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:50.370 05:04:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:50.370 05:04:40 -- setup/driver.sh@45 -- # setup output config 00:05:50.370 05:04:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:50.370 05:04:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:50.937 05:04:40 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:50.937 05:04:40 -- setup/driver.sh@58 -- # continue 00:05:50.937 05:04:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:51.195 05:04:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:51.195 05:04:40 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:05:51.195 05:04:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:51.195 05:04:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:51.195 05:04:40 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:51.195 05:04:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:51.195 05:04:40 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:51.195 05:04:40 -- setup/driver.sh@65 -- # setup reset 00:05:51.195 05:04:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:51.195 05:04:40 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:52.130 ************************************ 00:05:52.130 END TEST guess_driver 00:05:52.130 ************************************ 00:05:52.130 00:05:52.130 real 0m1.489s 00:05:52.130 user 0m0.576s 00:05:52.130 sys 0m0.912s 00:05:52.130 05:04:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.130 05:04:41 -- common/autotest_common.sh@10 -- # set +x 00:05:52.130 ************************************ 00:05:52.130 END TEST driver 00:05:52.130 ************************************ 00:05:52.130 00:05:52.130 real 0m2.295s 00:05:52.130 user 0m0.912s 00:05:52.130 sys 0m1.449s 00:05:52.130 05:04:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.130 05:04:41 -- common/autotest_common.sh@10 -- # set +x 00:05:52.130 05:04:41 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:52.130 05:04:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:52.130 05:04:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.130 05:04:41 -- common/autotest_common.sh@10 -- # set +x 00:05:52.130 ************************************ 00:05:52.130 START TEST devices 00:05:52.130 ************************************ 00:05:52.130 05:04:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:52.130 * Looking for test storage... 00:05:52.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:52.130 05:04:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:52.130 05:04:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:52.130 05:04:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:52.130 05:04:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:52.130 05:04:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:52.130 05:04:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:52.130 05:04:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:52.130 05:04:41 -- scripts/common.sh@335 -- # IFS=.-: 00:05:52.130 05:04:41 -- scripts/common.sh@335 -- # read -ra ver1 00:05:52.130 05:04:41 -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.130 05:04:41 -- scripts/common.sh@336 -- # read -ra ver2 00:05:52.130 05:04:41 -- scripts/common.sh@337 -- # local 'op=<' 00:05:52.130 05:04:41 -- scripts/common.sh@339 -- # ver1_l=2 00:05:52.131 05:04:41 -- scripts/common.sh@340 -- # ver2_l=1 00:05:52.131 05:04:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:52.131 05:04:41 -- scripts/common.sh@343 -- # case "$op" in 00:05:52.131 05:04:41 -- scripts/common.sh@344 -- # : 1 00:05:52.131 05:04:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:52.131 05:04:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.131 05:04:41 -- scripts/common.sh@364 -- # decimal 1 00:05:52.131 05:04:41 -- scripts/common.sh@352 -- # local d=1 00:05:52.131 05:04:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.131 05:04:41 -- scripts/common.sh@354 -- # echo 1 00:05:52.131 05:04:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:52.131 05:04:41 -- scripts/common.sh@365 -- # decimal 2 00:05:52.131 05:04:41 -- scripts/common.sh@352 -- # local d=2 00:05:52.131 05:04:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.131 05:04:41 -- scripts/common.sh@354 -- # echo 2 00:05:52.131 05:04:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:52.131 05:04:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:52.131 05:04:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:52.131 05:04:41 -- scripts/common.sh@367 -- # return 0 00:05:52.131 05:04:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.131 05:04:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:52.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.131 --rc genhtml_branch_coverage=1 00:05:52.131 --rc genhtml_function_coverage=1 00:05:52.131 --rc genhtml_legend=1 00:05:52.131 --rc geninfo_all_blocks=1 00:05:52.131 --rc geninfo_unexecuted_blocks=1 00:05:52.131 00:05:52.131 ' 00:05:52.131 05:04:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:52.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.131 --rc genhtml_branch_coverage=1 00:05:52.131 --rc genhtml_function_coverage=1 00:05:52.131 --rc genhtml_legend=1 00:05:52.131 --rc geninfo_all_blocks=1 00:05:52.131 --rc geninfo_unexecuted_blocks=1 00:05:52.131 00:05:52.131 ' 00:05:52.131 05:04:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:52.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.131 --rc genhtml_branch_coverage=1 00:05:52.131 --rc genhtml_function_coverage=1 00:05:52.131 --rc genhtml_legend=1 00:05:52.131 --rc geninfo_all_blocks=1 00:05:52.131 --rc geninfo_unexecuted_blocks=1 00:05:52.131 00:05:52.131 ' 00:05:52.131 05:04:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:52.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.131 --rc genhtml_branch_coverage=1 00:05:52.131 --rc genhtml_function_coverage=1 00:05:52.131 --rc genhtml_legend=1 00:05:52.131 --rc geninfo_all_blocks=1 00:05:52.131 --rc geninfo_unexecuted_blocks=1 00:05:52.131 00:05:52.131 ' 00:05:52.131 05:04:41 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:52.131 05:04:41 -- setup/devices.sh@192 -- # setup reset 00:05:52.131 05:04:41 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:52.131 05:04:41 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:53.066 05:04:42 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:53.066 05:04:42 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:53.066 05:04:42 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:53.066 05:04:42 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:53.066 05:04:42 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:53.066 05:04:42 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:53.066 05:04:42 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:53.066 05:04:42 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:53.066 05:04:42 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:05:53.066 05:04:42 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:53.066 05:04:42 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:53.066 05:04:42 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:53.066 05:04:42 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:53.066 05:04:42 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:53.066 05:04:42 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:53.066 05:04:42 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:53.066 05:04:42 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:53.066 05:04:42 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:53.066 05:04:42 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:53.066 05:04:42 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:53.066 05:04:42 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:53.066 05:04:42 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:53.066 05:04:42 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:53.066 05:04:42 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:53.066 05:04:42 -- setup/devices.sh@196 -- # blocks=() 00:05:53.066 05:04:42 -- setup/devices.sh@196 -- # declare -a blocks 00:05:53.066 05:04:42 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:53.066 05:04:42 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:53.066 05:04:42 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:53.066 05:04:42 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:53.066 05:04:42 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:53.066 05:04:42 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:53.066 05:04:42 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:53.066 05:04:42 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:53.066 05:04:42 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:53.066 05:04:42 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:53.066 05:04:42 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:53.066 No valid GPT data, bailing 00:05:53.066 05:04:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:53.066 05:04:42 -- scripts/common.sh@393 -- # pt= 00:05:53.066 05:04:42 -- scripts/common.sh@394 -- # return 1 00:05:53.066 05:04:42 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:53.066 05:04:42 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:53.066 05:04:42 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:53.066 05:04:42 -- setup/common.sh@80 -- # echo 5368709120 00:05:53.066 05:04:42 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:53.066 05:04:42 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:53.066 05:04:42 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:53.066 05:04:42 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:53.066 05:04:42 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:53.066 05:04:42 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:53.066 05:04:42 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:53.066 05:04:42 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:53.066 05:04:42 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
00:05:53.066 05:04:42 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:53.066 05:04:42 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:53.066 No valid GPT data, bailing 00:05:53.066 05:04:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:53.066 05:04:42 -- scripts/common.sh@393 -- # pt= 00:05:53.066 05:04:42 -- scripts/common.sh@394 -- # return 1 00:05:53.066 05:04:42 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:53.066 05:04:42 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:53.066 05:04:42 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:53.066 05:04:42 -- setup/common.sh@80 -- # echo 4294967296 00:05:53.066 05:04:42 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:53.066 05:04:42 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:53.066 05:04:42 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:53.066 05:04:42 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:53.066 05:04:42 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:53.066 05:04:42 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:53.066 05:04:42 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:53.066 05:04:42 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:53.066 05:04:42 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:53.066 05:04:42 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:53.066 05:04:42 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:53.066 No valid GPT data, bailing 00:05:53.066 05:04:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:53.066 05:04:42 -- scripts/common.sh@393 -- # pt= 00:05:53.066 05:04:42 -- scripts/common.sh@394 -- # return 1 00:05:53.066 05:04:42 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:53.066 05:04:42 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:53.066 05:04:42 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:53.066 05:04:42 -- setup/common.sh@80 -- # echo 4294967296 00:05:53.066 05:04:42 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:53.066 05:04:42 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:53.066 05:04:42 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:53.066 05:04:42 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:53.066 05:04:42 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:53.066 05:04:42 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:53.066 05:04:42 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:53.066 05:04:42 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:53.066 05:04:42 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:53.066 05:04:42 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:53.066 05:04:42 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:53.323 No valid GPT data, bailing 00:05:53.323 05:04:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:53.323 05:04:42 -- scripts/common.sh@393 -- # pt= 00:05:53.323 05:04:42 -- scripts/common.sh@394 -- # return 1 00:05:53.323 05:04:42 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:53.323 05:04:42 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:53.323 05:04:42 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:53.323 05:04:42 -- setup/common.sh@80 -- # echo 4294967296 
00:05:53.323 05:04:42 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:53.323 05:04:42 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:53.323 05:04:42 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:53.323 05:04:42 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:53.323 05:04:42 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:53.323 05:04:42 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:53.323 05:04:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:53.323 05:04:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.323 05:04:42 -- common/autotest_common.sh@10 -- # set +x 00:05:53.323 ************************************ 00:05:53.323 START TEST nvme_mount 00:05:53.323 ************************************ 00:05:53.323 05:04:42 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:53.323 05:04:42 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:53.323 05:04:42 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:53.323 05:04:42 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.323 05:04:42 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:53.323 05:04:42 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:53.323 05:04:42 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:53.323 05:04:42 -- setup/common.sh@40 -- # local part_no=1 00:05:53.323 05:04:42 -- setup/common.sh@41 -- # local size=1073741824 00:05:53.323 05:04:42 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:53.323 05:04:42 -- setup/common.sh@44 -- # parts=() 00:05:53.323 05:04:42 -- setup/common.sh@44 -- # local parts 00:05:53.323 05:04:42 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:53.323 05:04:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:53.323 05:04:42 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:53.323 05:04:42 -- setup/common.sh@46 -- # (( part++ )) 00:05:53.323 05:04:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:53.323 05:04:42 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:53.323 05:04:42 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:53.323 05:04:42 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:54.258 Creating new GPT entries in memory. 00:05:54.258 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:54.258 other utilities. 00:05:54.258 05:04:43 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:54.258 05:04:43 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:54.258 05:04:43 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:54.258 05:04:43 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:54.258 05:04:43 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:55.633 Creating new GPT entries in memory. 00:05:55.633 The operation has completed successfully. 
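(Editor's note: at this point the nvme_mount test has zapped the disk's partition table and created partition 1; the trace that follows formats the partition, mounts it under the repo's test directory, verifies the PCI binding, and then unwinds everything with umount and wipefs. A condensed sketch of that sequence, reusing the device and mount point reported in this log; on another machine both would differ.)

```bash
#!/usr/bin/env bash
# Sketch of the partition/format/mount cycle traced by the nvme_mount test.
set -e
disk=/dev/nvme0n1
part=${disk}p1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                          # wipe old GPT/MBR signatures
flock "$disk" sgdisk "$disk" --new=1:2048:264191  # partition 1, sectors 2048-264191
mkdir -p "$mnt"
mkfs.ext4 -qF "$part"
mount "$part" "$mnt"

# ... the test writes and checks its test_nvme file here ...

umount "$mnt"
wipefs --all "$part"
wipefs --all "$disk"      # the log shows the GPT and PMBR signatures being erased
```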
00:05:55.633 05:04:44 -- setup/common.sh@57 -- # (( part++ )) 00:05:55.633 05:04:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:55.633 05:04:44 -- setup/common.sh@62 -- # wait 64174 00:05:55.633 05:04:45 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:55.633 05:04:45 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:55.633 05:04:45 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:55.633 05:04:45 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:55.633 05:04:45 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:55.633 05:04:45 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:55.633 05:04:45 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:55.633 05:04:45 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:55.633 05:04:45 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:55.633 05:04:45 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:55.633 05:04:45 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:55.633 05:04:45 -- setup/devices.sh@53 -- # local found=0 00:05:55.633 05:04:45 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:55.633 05:04:45 -- setup/devices.sh@56 -- # : 00:05:55.633 05:04:45 -- setup/devices.sh@59 -- # local pci status 00:05:55.633 05:04:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.633 05:04:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:55.633 05:04:45 -- setup/devices.sh@47 -- # setup output config 00:05:55.633 05:04:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:55.633 05:04:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:55.633 05:04:45 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:55.633 05:04:45 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:55.633 05:04:45 -- setup/devices.sh@63 -- # found=1 00:05:55.633 05:04:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.633 05:04:45 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:55.633 05:04:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.891 05:04:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:55.891 05:04:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.891 05:04:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:55.891 05:04:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:56.149 05:04:45 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:56.149 05:04:45 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:56.149 05:04:45 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:56.149 05:04:45 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:56.149 05:04:45 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:56.149 05:04:45 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:56.149 05:04:45 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:56.149 05:04:45 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:56.149 05:04:45 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:56.149 05:04:45 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:56.149 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:56.149 05:04:45 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:56.149 05:04:45 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:56.408 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:56.408 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:56.408 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:56.408 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:56.408 05:04:46 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:56.408 05:04:46 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:56.408 05:04:46 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:56.408 05:04:46 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:56.408 05:04:46 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:56.408 05:04:46 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:56.408 05:04:46 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:56.408 05:04:46 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:56.408 05:04:46 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:56.408 05:04:46 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:56.408 05:04:46 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:56.408 05:04:46 -- setup/devices.sh@53 -- # local found=0 00:05:56.408 05:04:46 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:56.408 05:04:46 -- setup/devices.sh@56 -- # : 00:05:56.408 05:04:46 -- setup/devices.sh@59 -- # local pci status 00:05:56.408 05:04:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:56.408 05:04:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:56.408 05:04:46 -- setup/devices.sh@47 -- # setup output config 00:05:56.408 05:04:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:56.408 05:04:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:56.666 05:04:46 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:56.666 05:04:46 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:56.666 05:04:46 -- setup/devices.sh@63 -- # found=1 00:05:56.666 05:04:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:56.666 05:04:46 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:56.666 
05:04:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:56.925 05:04:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:56.925 05:04:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:56.925 05:04:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:56.925 05:04:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.184 05:04:46 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:57.184 05:04:46 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:57.184 05:04:46 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:57.184 05:04:46 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:57.184 05:04:46 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:57.184 05:04:46 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:57.184 05:04:46 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:57.184 05:04:46 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:57.184 05:04:46 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:57.184 05:04:46 -- setup/devices.sh@50 -- # local mount_point= 00:05:57.184 05:04:46 -- setup/devices.sh@51 -- # local test_file= 00:05:57.184 05:04:46 -- setup/devices.sh@53 -- # local found=0 00:05:57.184 05:04:46 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:57.184 05:04:46 -- setup/devices.sh@59 -- # local pci status 00:05:57.184 05:04:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.184 05:04:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:57.184 05:04:46 -- setup/devices.sh@47 -- # setup output config 00:05:57.184 05:04:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:57.184 05:04:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:57.443 05:04:47 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:57.443 05:04:47 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:57.443 05:04:47 -- setup/devices.sh@63 -- # found=1 00:05:57.443 05:04:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.443 05:04:47 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:57.443 05:04:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.702 05:04:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:57.702 05:04:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.702 05:04:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:57.702 05:04:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.961 05:04:47 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:57.961 05:04:47 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:57.961 05:04:47 -- setup/devices.sh@68 -- # return 0 00:05:57.961 05:04:47 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:57.961 05:04:47 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:57.961 05:04:47 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:57.961 05:04:47 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:57.961 05:04:47 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:57.961 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:57.961 ************************************ 00:05:57.961 END TEST nvme_mount 00:05:57.961 ************************************ 00:05:57.961 00:05:57.961 real 0m4.601s 00:05:57.961 user 0m1.086s 00:05:57.961 sys 0m1.204s 00:05:57.961 05:04:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.961 05:04:47 -- common/autotest_common.sh@10 -- # set +x 00:05:57.961 05:04:47 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:57.961 05:04:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.961 05:04:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.961 05:04:47 -- common/autotest_common.sh@10 -- # set +x 00:05:57.961 ************************************ 00:05:57.961 START TEST dm_mount 00:05:57.961 ************************************ 00:05:57.961 05:04:47 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:57.961 05:04:47 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:57.961 05:04:47 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:57.961 05:04:47 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:57.961 05:04:47 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:57.961 05:04:47 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:57.961 05:04:47 -- setup/common.sh@40 -- # local part_no=2 00:05:57.961 05:04:47 -- setup/common.sh@41 -- # local size=1073741824 00:05:57.961 05:04:47 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:57.961 05:04:47 -- setup/common.sh@44 -- # parts=() 00:05:57.961 05:04:47 -- setup/common.sh@44 -- # local parts 00:05:57.961 05:04:47 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:57.962 05:04:47 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:57.962 05:04:47 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:57.962 05:04:47 -- setup/common.sh@46 -- # (( part++ )) 00:05:57.962 05:04:47 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:57.962 05:04:47 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:57.962 05:04:47 -- setup/common.sh@46 -- # (( part++ )) 00:05:57.962 05:04:47 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:57.962 05:04:47 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:57.962 05:04:47 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:57.962 05:04:47 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:58.896 Creating new GPT entries in memory. 00:05:58.896 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:58.896 other utilities. 00:05:58.896 05:04:48 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:58.896 05:04:48 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:58.896 05:04:48 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:58.896 05:04:48 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:58.896 05:04:48 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:00.276 Creating new GPT entries in memory. 00:06:00.276 The operation has completed successfully. 00:06:00.276 05:04:49 -- setup/common.sh@57 -- # (( part++ )) 00:06:00.276 05:04:49 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:00.276 05:04:49 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:06:00.276 05:04:49 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:00.276 05:04:49 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:06:01.213 The operation has completed successfully. 00:06:01.213 05:04:50 -- setup/common.sh@57 -- # (( part++ )) 00:06:01.213 05:04:50 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:01.213 05:04:50 -- setup/common.sh@62 -- # wait 64633 00:06:01.213 05:04:50 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:01.213 05:04:50 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:01.213 05:04:50 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:01.213 05:04:50 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:01.213 05:04:50 -- setup/devices.sh@160 -- # for t in {1..5} 00:06:01.213 05:04:50 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:01.213 05:04:50 -- setup/devices.sh@161 -- # break 00:06:01.213 05:04:50 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:01.213 05:04:50 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:01.213 05:04:50 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:01.213 05:04:50 -- setup/devices.sh@166 -- # dm=dm-0 00:06:01.213 05:04:50 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:01.213 05:04:50 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:01.213 05:04:50 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:01.213 05:04:50 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:06:01.213 05:04:50 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:01.213 05:04:50 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:01.213 05:04:50 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:01.213 05:04:50 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:01.213 05:04:50 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:01.213 05:04:50 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:06:01.213 05:04:50 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:01.213 05:04:50 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:01.213 05:04:50 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:01.213 05:04:50 -- setup/devices.sh@53 -- # local found=0 00:06:01.213 05:04:50 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:01.213 05:04:50 -- setup/devices.sh@56 -- # : 00:06:01.213 05:04:50 -- setup/devices.sh@59 -- # local pci status 00:06:01.213 05:04:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.213 05:04:50 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:06:01.213 05:04:50 -- setup/devices.sh@47 -- # setup output config 00:06:01.213 05:04:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:01.213 05:04:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:01.213 05:04:50 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:01.213 05:04:50 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:01.213 05:04:50 -- setup/devices.sh@63 -- # found=1 00:06:01.213 05:04:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.213 05:04:50 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:01.213 05:04:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.782 05:04:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:01.782 05:04:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.782 05:04:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:01.782 05:04:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.782 05:04:51 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:01.782 05:04:51 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:06:01.782 05:04:51 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:01.782 05:04:51 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:01.782 05:04:51 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:01.782 05:04:51 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:01.782 05:04:51 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:01.782 05:04:51 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:06:01.782 05:04:51 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:01.782 05:04:51 -- setup/devices.sh@50 -- # local mount_point= 00:06:01.782 05:04:51 -- setup/devices.sh@51 -- # local test_file= 00:06:01.782 05:04:51 -- setup/devices.sh@53 -- # local found=0 00:06:01.782 05:04:51 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:01.782 05:04:51 -- setup/devices.sh@59 -- # local pci status 00:06:01.782 05:04:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.782 05:04:51 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:06:01.782 05:04:51 -- setup/devices.sh@47 -- # setup output config 00:06:01.782 05:04:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:01.782 05:04:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:02.040 05:04:51 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:02.040 05:04:51 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:02.040 05:04:51 -- setup/devices.sh@63 -- # found=1 00:06:02.040 05:04:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.040 05:04:51 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:02.040 05:04:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.299 05:04:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:02.299 05:04:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.299 05:04:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:02.299 05:04:52 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.557 05:04:52 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:02.558 05:04:52 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:02.558 05:04:52 -- setup/devices.sh@68 -- # return 0 00:06:02.558 05:04:52 -- setup/devices.sh@187 -- # cleanup_dm 00:06:02.558 05:04:52 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:02.558 05:04:52 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:02.558 05:04:52 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:02.558 05:04:52 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:02.558 05:04:52 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:02.558 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:02.558 05:04:52 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:02.558 05:04:52 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:02.558 00:06:02.558 real 0m4.622s 00:06:02.558 user 0m0.701s 00:06:02.558 sys 0m0.838s 00:06:02.558 05:04:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.558 ************************************ 00:06:02.558 END TEST dm_mount 00:06:02.558 ************************************ 00:06:02.558 05:04:52 -- common/autotest_common.sh@10 -- # set +x 00:06:02.558 05:04:52 -- setup/devices.sh@1 -- # cleanup 00:06:02.558 05:04:52 -- setup/devices.sh@11 -- # cleanup_nvme 00:06:02.558 05:04:52 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:02.558 05:04:52 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:02.558 05:04:52 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:02.558 05:04:52 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:02.558 05:04:52 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:02.816 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:02.816 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:02.816 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:02.816 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:02.816 05:04:52 -- setup/devices.sh@12 -- # cleanup_dm 00:06:02.816 05:04:52 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:02.816 05:04:52 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:02.816 05:04:52 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:02.816 05:04:52 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:02.816 05:04:52 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:02.816 05:04:52 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:02.816 00:06:02.816 real 0m10.906s 00:06:02.816 user 0m2.552s 00:06:02.816 sys 0m2.668s 00:06:02.816 05:04:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.816 05:04:52 -- common/autotest_common.sh@10 -- # set +x 00:06:02.816 ************************************ 00:06:02.816 END TEST devices 00:06:02.816 ************************************ 00:06:02.816 00:06:02.816 real 0m23.146s 00:06:02.816 user 0m8.141s 00:06:02.816 sys 0m9.386s 00:06:02.816 05:04:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.816 05:04:52 -- common/autotest_common.sh@10 -- # set +x 00:06:02.816 ************************************ 00:06:02.816 END TEST setup.sh 00:06:02.816 ************************************ 00:06:03.074 05:04:52 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:03.074 Hugepages 00:06:03.074 node hugesize free / total 00:06:03.074 node0 1048576kB 0 / 0 00:06:03.074 node0 2048kB 2048 / 2048 00:06:03.074 00:06:03.074 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:03.332 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:03.332 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:03.332 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:03.332 05:04:53 -- spdk/autotest.sh@128 -- # uname -s 00:06:03.332 05:04:53 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:06:03.332 05:04:53 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:06:03.332 05:04:53 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:04.265 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:04.265 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:06:04.265 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:06:04.265 05:04:53 -- common/autotest_common.sh@1527 -- # sleep 1 00:06:05.198 05:04:54 -- common/autotest_common.sh@1528 -- # bdfs=() 00:06:05.198 05:04:54 -- common/autotest_common.sh@1528 -- # local bdfs 00:06:05.198 05:04:54 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:06:05.198 05:04:54 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:06:05.198 05:04:54 -- common/autotest_common.sh@1508 -- # bdfs=() 00:06:05.198 05:04:54 -- common/autotest_common.sh@1508 -- # local bdfs 00:06:05.198 05:04:54 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:05.198 05:04:54 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:05.198 05:04:54 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:06:05.455 05:04:54 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:06:05.455 05:04:54 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:06:05.455 05:04:54 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:05.712 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:05.712 Waiting for block devices as requested 00:06:05.712 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:06:05.969 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:06:05.969 05:04:55 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:06:05.969 05:04:55 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:06:05.969 05:04:55 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:06:05.969 05:04:55 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:05.969 05:04:55 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:06:05.969 05:04:55 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:06:05.969 05:04:55 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:06:05.969 05:04:55 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:06:05.969 05:04:55 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:06:05.969 05:04:55 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:06:05.969 05:04:55 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:05.969 05:04:55 -- common/autotest_common.sh@1540 -- # grep oacs 00:06:05.970 05:04:55 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:05.970 05:04:55 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:06:05.970 05:04:55 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:06:05.970 05:04:55 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:06:05.970 05:04:55 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:06:05.970 05:04:55 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:06:05.970 05:04:55 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:06:05.970 05:04:55 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:06:05.970 05:04:55 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:06:05.970 05:04:55 -- common/autotest_common.sh@1552 -- # continue 00:06:05.970 05:04:55 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:06:05.970 05:04:55 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:06:05.970 05:04:55 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:05.970 05:04:55 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:06:05.970 05:04:55 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:06:05.970 05:04:55 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:06:05.970 05:04:55 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:06:05.970 05:04:55 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:06:05.970 05:04:55 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:06:05.970 05:04:55 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:06:05.970 05:04:55 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:05.970 05:04:55 -- common/autotest_common.sh@1540 -- # grep oacs 00:06:05.970 05:04:55 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:05.970 05:04:55 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:06:05.970 05:04:55 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:06:05.970 05:04:55 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:06:05.970 05:04:55 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:06:05.970 05:04:55 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:06:05.970 05:04:55 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:06:05.970 05:04:55 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:06:05.970 05:04:55 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:06:05.970 05:04:55 -- common/autotest_common.sh@1552 -- # continue 00:06:05.970 05:04:55 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:06:05.970 05:04:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:05.970 05:04:55 -- common/autotest_common.sh@10 -- # set +x 00:06:05.970 05:04:55 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:06:05.970 05:04:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:05.970 05:04:55 -- common/autotest_common.sh@10 -- # set +x 00:06:05.970 05:04:55 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:06.904 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:06.904 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:06:06.904 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:06:06.904 05:04:56 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:06:06.904 05:04:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:06.904 05:04:56 -- common/autotest_common.sh@10 -- # set +x 00:06:06.904 05:04:56 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:06:06.904 05:04:56 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:06:06.904 05:04:56 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:06:06.904 05:04:56 -- common/autotest_common.sh@1572 -- # bdfs=() 00:06:06.904 05:04:56 -- common/autotest_common.sh@1572 -- # local bdfs 00:06:06.904 05:04:56 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:06:06.904 05:04:56 -- common/autotest_common.sh@1508 -- # bdfs=() 00:06:06.904 05:04:56 -- common/autotest_common.sh@1508 -- # local bdfs 00:06:06.904 05:04:56 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:06.904 05:04:56 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:06.904 05:04:56 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:06:07.163 05:04:56 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:06:07.163 05:04:56 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:06:07.163 05:04:56 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:06:07.163 05:04:56 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:06:07.163 05:04:56 -- common/autotest_common.sh@1575 -- # device=0x0010 00:06:07.163 05:04:56 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:07.163 05:04:56 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:06:07.163 05:04:56 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:06:07.163 05:04:56 -- common/autotest_common.sh@1575 -- # device=0x0010 00:06:07.163 05:04:56 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:07.163 05:04:56 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:06:07.163 05:04:56 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:06:07.163 05:04:56 -- common/autotest_common.sh@1588 -- # return 0 00:06:07.163 05:04:56 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:06:07.163 05:04:56 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:06:07.163 05:04:56 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:06:07.163 05:04:56 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:06:07.163 05:04:56 -- spdk/autotest.sh@160 -- # timing_enter lib 00:06:07.163 05:04:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:07.163 05:04:56 -- common/autotest_common.sh@10 -- # set +x 00:06:07.163 05:04:56 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:07.163 05:04:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.163 05:04:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.163 05:04:56 -- common/autotest_common.sh@10 -- # set +x 00:06:07.163 ************************************ 00:06:07.163 START TEST env 00:06:07.163 ************************************ 00:06:07.163 05:04:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:07.163 * Looking for test storage... 
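The pre-test cleanup traced above resolves the NVMe controller addresses by piping scripts/gen_nvme.sh through jq (see get_nvme_bdfs). A minimal stand-alone sketch of that step, assuming the same /home/vagrant/spdk_repo checkout used in this run:

    # enumerate NVMe controller BDFs the same way the harness does
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"   # prints 0000:00:06.0 and 0000:00:07.0 on this VM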
00:06:07.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:07.163 05:04:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:07.163 05:04:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:07.163 05:04:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:07.423 05:04:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:07.423 05:04:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:07.423 05:04:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:07.423 05:04:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:07.423 05:04:56 -- scripts/common.sh@335 -- # IFS=.-: 00:06:07.423 05:04:56 -- scripts/common.sh@335 -- # read -ra ver1 00:06:07.423 05:04:56 -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.423 05:04:56 -- scripts/common.sh@336 -- # read -ra ver2 00:06:07.423 05:04:56 -- scripts/common.sh@337 -- # local 'op=<' 00:06:07.423 05:04:56 -- scripts/common.sh@339 -- # ver1_l=2 00:06:07.423 05:04:56 -- scripts/common.sh@340 -- # ver2_l=1 00:06:07.423 05:04:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:07.423 05:04:56 -- scripts/common.sh@343 -- # case "$op" in 00:06:07.423 05:04:56 -- scripts/common.sh@344 -- # : 1 00:06:07.423 05:04:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:07.423 05:04:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.423 05:04:56 -- scripts/common.sh@364 -- # decimal 1 00:06:07.423 05:04:56 -- scripts/common.sh@352 -- # local d=1 00:06:07.423 05:04:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.423 05:04:56 -- scripts/common.sh@354 -- # echo 1 00:06:07.423 05:04:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:07.423 05:04:56 -- scripts/common.sh@365 -- # decimal 2 00:06:07.423 05:04:56 -- scripts/common.sh@352 -- # local d=2 00:06:07.423 05:04:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.423 05:04:56 -- scripts/common.sh@354 -- # echo 2 00:06:07.423 05:04:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:07.423 05:04:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:07.423 05:04:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:07.423 05:04:56 -- scripts/common.sh@367 -- # return 0 00:06:07.423 05:04:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.423 05:04:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:07.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.423 --rc genhtml_branch_coverage=1 00:06:07.423 --rc genhtml_function_coverage=1 00:06:07.423 --rc genhtml_legend=1 00:06:07.423 --rc geninfo_all_blocks=1 00:06:07.423 --rc geninfo_unexecuted_blocks=1 00:06:07.423 00:06:07.423 ' 00:06:07.423 05:04:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:07.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.423 --rc genhtml_branch_coverage=1 00:06:07.423 --rc genhtml_function_coverage=1 00:06:07.423 --rc genhtml_legend=1 00:06:07.423 --rc geninfo_all_blocks=1 00:06:07.423 --rc geninfo_unexecuted_blocks=1 00:06:07.423 00:06:07.423 ' 00:06:07.423 05:04:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:07.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.423 --rc genhtml_branch_coverage=1 00:06:07.423 --rc genhtml_function_coverage=1 00:06:07.423 --rc genhtml_legend=1 00:06:07.423 --rc geninfo_all_blocks=1 00:06:07.423 --rc geninfo_unexecuted_blocks=1 00:06:07.423 00:06:07.423 ' 00:06:07.423 05:04:56 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:07.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.423 --rc genhtml_branch_coverage=1 00:06:07.423 --rc genhtml_function_coverage=1 00:06:07.423 --rc genhtml_legend=1 00:06:07.423 --rc geninfo_all_blocks=1 00:06:07.423 --rc geninfo_unexecuted_blocks=1 00:06:07.423 00:06:07.423 ' 00:06:07.423 05:04:56 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:07.423 05:04:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.423 05:04:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.423 05:04:56 -- common/autotest_common.sh@10 -- # set +x 00:06:07.423 ************************************ 00:06:07.423 START TEST env_memory 00:06:07.423 ************************************ 00:06:07.423 05:04:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:07.423 00:06:07.423 00:06:07.423 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.423 http://cunit.sourceforge.net/ 00:06:07.423 00:06:07.423 00:06:07.423 Suite: memory 00:06:07.423 Test: alloc and free memory map ...[2024-12-08 05:04:57.022820] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:07.423 passed 00:06:07.423 Test: mem map translation ...[2024-12-08 05:04:57.055324] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:07.423 [2024-12-08 05:04:57.055365] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:07.423 [2024-12-08 05:04:57.055421] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:07.423 [2024-12-08 05:04:57.055432] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:07.423 passed 00:06:07.423 Test: mem map registration ...[2024-12-08 05:04:57.120762] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:07.423 [2024-12-08 05:04:57.120796] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:07.423 passed 00:06:07.423 Test: mem map adjacent registrations ...passed 00:06:07.423 00:06:07.423 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.423 suites 1 1 n/a 0 0 00:06:07.423 tests 4 4 4 0 0 00:06:07.423 asserts 152 152 152 0 n/a 00:06:07.423 00:06:07.423 Elapsed time = 0.218 seconds 00:06:07.683 00:06:07.683 real 0m0.234s 00:06:07.683 user 0m0.217s 00:06:07.683 sys 0m0.013s 00:06:07.683 05:04:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.683 05:04:57 -- common/autotest_common.sh@10 -- # set +x 00:06:07.683 ************************************ 00:06:07.683 END TEST env_memory 00:06:07.683 ************************************ 00:06:07.683 05:04:57 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:07.683 05:04:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.683 05:04:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.683 05:04:57 -- 
common/autotest_common.sh@10 -- # set +x 00:06:07.683 ************************************ 00:06:07.683 START TEST env_vtophys 00:06:07.683 ************************************ 00:06:07.683 05:04:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:07.683 EAL: lib.eal log level changed from notice to debug 00:06:07.683 EAL: Detected lcore 0 as core 0 on socket 0 00:06:07.683 EAL: Detected lcore 1 as core 0 on socket 0 00:06:07.683 EAL: Detected lcore 2 as core 0 on socket 0 00:06:07.683 EAL: Detected lcore 3 as core 0 on socket 0 00:06:07.683 EAL: Detected lcore 4 as core 0 on socket 0 00:06:07.683 EAL: Detected lcore 5 as core 0 on socket 0 00:06:07.683 EAL: Detected lcore 6 as core 0 on socket 0 00:06:07.683 EAL: Detected lcore 7 as core 0 on socket 0 00:06:07.683 EAL: Detected lcore 8 as core 0 on socket 0 00:06:07.683 EAL: Detected lcore 9 as core 0 on socket 0 00:06:07.683 EAL: Maximum logical cores by configuration: 128 00:06:07.683 EAL: Detected CPU lcores: 10 00:06:07.683 EAL: Detected NUMA nodes: 1 00:06:07.683 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:07.683 EAL: Detected shared linkage of DPDK 00:06:07.683 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:06:07.683 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:06:07.683 EAL: Registered [vdev] bus. 00:06:07.683 EAL: bus.vdev log level changed from disabled to notice 00:06:07.683 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:06:07.683 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:06:07.683 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:07.683 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:07.683 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:06:07.683 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:06:07.683 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:06:07.683 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:06:07.683 EAL: No shared files mode enabled, IPC will be disabled 00:06:07.683 EAL: No shared files mode enabled, IPC is disabled 00:06:07.683 EAL: Selected IOVA mode 'PA' 00:06:07.683 EAL: Probing VFIO support... 00:06:07.683 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:07.683 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:07.683 EAL: Ask a virtual area of 0x2e000 bytes 00:06:07.683 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:07.683 EAL: Setting up physically contiguous memory... 
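EAL probes VFIO first and falls back because /sys/module/vfio is missing in this VM, then sizes its contiguous memory from the reserved 2 MB hugepages. A quick manual check of the same preconditions (a sketch, not part of the test scripts) could be:

    # is the vfio kernel module loaded?
    ls /sys/module | grep -i '^vfio' || echo 'vfio not loaded'
    # how many 2 MB hugepages are reserved? (setup.sh status reported 2048 / 2048 above)
    grep -i '^hugepages' /proc/meminfo
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages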
00:06:07.683 EAL: Setting maximum number of open files to 524288 00:06:07.683 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:07.683 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:07.683 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.683 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:07.683 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:07.683 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.683 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:07.683 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:07.683 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.683 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:07.683 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:07.683 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.683 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:07.683 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:07.683 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.683 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:07.683 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:07.683 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.683 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:07.683 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:07.684 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.684 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:07.684 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:07.684 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.684 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:07.684 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:07.684 EAL: Hugepages will be freed exactly as allocated. 00:06:07.684 EAL: No shared files mode enabled, IPC is disabled 00:06:07.684 EAL: No shared files mode enabled, IPC is disabled 00:06:07.684 EAL: TSC frequency is ~2200000 KHz 00:06:07.684 EAL: Main lcore 0 is ready (tid=7f275da82a00;cpuset=[0]) 00:06:07.684 EAL: Trying to obtain current memory policy. 00:06:07.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.684 EAL: Restoring previous memory policy: 0 00:06:07.684 EAL: request: mp_malloc_sync 00:06:07.684 EAL: No shared files mode enabled, IPC is disabled 00:06:07.684 EAL: Heap on socket 0 was expanded by 2MB 00:06:07.684 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:07.684 EAL: No shared files mode enabled, IPC is disabled 00:06:07.684 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:07.684 EAL: Mem event callback 'spdk:(nil)' registered 00:06:07.684 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:07.684 00:06:07.684 00:06:07.684 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.684 http://cunit.sourceforge.net/ 00:06:07.684 00:06:07.684 00:06:07.684 Suite: components_suite 00:06:07.684 Test: vtophys_malloc_test ...passed 00:06:07.684 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:06:07.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.684 EAL: Restoring previous memory policy: 4 00:06:07.684 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.684 EAL: request: mp_malloc_sync 00:06:07.684 EAL: No shared files mode enabled, IPC is disabled 00:06:07.684 EAL: Heap on socket 0 was expanded by 4MB 00:06:07.684 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.684 EAL: request: mp_malloc_sync 00:06:07.684 EAL: No shared files mode enabled, IPC is disabled 00:06:07.684 EAL: Heap on socket 0 was shrunk by 4MB 00:06:07.684 EAL: Trying to obtain current memory policy. 00:06:07.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.684 EAL: Restoring previous memory policy: 4 00:06:07.684 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.684 EAL: request: mp_malloc_sync 00:06:07.684 EAL: No shared files mode enabled, IPC is disabled 00:06:07.684 EAL: Heap on socket 0 was expanded by 6MB 00:06:07.684 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.684 EAL: request: mp_malloc_sync 00:06:07.684 EAL: No shared files mode enabled, IPC is disabled 00:06:07.684 EAL: Heap on socket 0 was shrunk by 6MB 00:06:07.684 EAL: Trying to obtain current memory policy. 00:06:07.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.684 EAL: Restoring previous memory policy: 4 00:06:07.684 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.684 EAL: request: mp_malloc_sync 00:06:07.684 EAL: No shared files mode enabled, IPC is disabled 00:06:07.684 EAL: Heap on socket 0 was expanded by 10MB 00:06:07.684 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.684 EAL: request: mp_malloc_sync 00:06:07.684 EAL: No shared files mode enabled, IPC is disabled 00:06:07.684 EAL: Heap on socket 0 was shrunk by 10MB 00:06:07.684 EAL: Trying to obtain current memory policy. 00:06:07.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.684 EAL: Restoring previous memory policy: 4 00:06:07.684 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.684 EAL: request: mp_malloc_sync 00:06:07.684 EAL: No shared files mode enabled, IPC is disabled 00:06:07.684 EAL: Heap on socket 0 was expanded by 18MB 00:06:07.684 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.684 EAL: request: mp_malloc_sync 00:06:07.684 EAL: No shared files mode enabled, IPC is disabled 00:06:07.684 EAL: Heap on socket 0 was shrunk by 18MB 00:06:07.684 EAL: Trying to obtain current memory policy. 00:06:07.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.684 EAL: Restoring previous memory policy: 4 00:06:07.684 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.684 EAL: request: mp_malloc_sync 00:06:07.684 EAL: No shared files mode enabled, IPC is disabled 00:06:07.684 EAL: Heap on socket 0 was expanded by 34MB 00:06:07.684 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.684 EAL: request: mp_malloc_sync 00:06:07.684 EAL: No shared files mode enabled, IPC is disabled 00:06:07.684 EAL: Heap on socket 0 was shrunk by 34MB 00:06:07.684 EAL: Trying to obtain current memory policy. 
00:06:07.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.684 EAL: Restoring previous memory policy: 4 00:06:07.684 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.684 EAL: request: mp_malloc_sync 00:06:07.684 EAL: No shared files mode enabled, IPC is disabled 00:06:07.684 EAL: Heap on socket 0 was expanded by 66MB 00:06:07.684 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.684 EAL: request: mp_malloc_sync 00:06:07.684 EAL: No shared files mode enabled, IPC is disabled 00:06:07.684 EAL: Heap on socket 0 was shrunk by 66MB 00:06:07.684 EAL: Trying to obtain current memory policy. 00:06:07.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.944 EAL: Restoring previous memory policy: 4 00:06:07.944 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.944 EAL: request: mp_malloc_sync 00:06:07.944 EAL: No shared files mode enabled, IPC is disabled 00:06:07.944 EAL: Heap on socket 0 was expanded by 130MB 00:06:07.944 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.944 EAL: request: mp_malloc_sync 00:06:07.944 EAL: No shared files mode enabled, IPC is disabled 00:06:07.944 EAL: Heap on socket 0 was shrunk by 130MB 00:06:07.944 EAL: Trying to obtain current memory policy. 00:06:07.944 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.944 EAL: Restoring previous memory policy: 4 00:06:07.944 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.944 EAL: request: mp_malloc_sync 00:06:07.944 EAL: No shared files mode enabled, IPC is disabled 00:06:07.944 EAL: Heap on socket 0 was expanded by 258MB 00:06:07.944 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.944 EAL: request: mp_malloc_sync 00:06:07.944 EAL: No shared files mode enabled, IPC is disabled 00:06:07.944 EAL: Heap on socket 0 was shrunk by 258MB 00:06:07.944 EAL: Trying to obtain current memory policy. 00:06:07.944 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.944 EAL: Restoring previous memory policy: 4 00:06:07.944 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.944 EAL: request: mp_malloc_sync 00:06:07.944 EAL: No shared files mode enabled, IPC is disabled 00:06:07.944 EAL: Heap on socket 0 was expanded by 514MB 00:06:08.204 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.204 EAL: request: mp_malloc_sync 00:06:08.204 EAL: No shared files mode enabled, IPC is disabled 00:06:08.204 EAL: Heap on socket 0 was shrunk by 514MB 00:06:08.204 EAL: Trying to obtain current memory policy. 
00:06:08.204 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.204 EAL: Restoring previous memory policy: 4 00:06:08.204 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.204 EAL: request: mp_malloc_sync 00:06:08.204 EAL: No shared files mode enabled, IPC is disabled 00:06:08.204 EAL: Heap on socket 0 was expanded by 1026MB 00:06:08.464 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.464 passed 00:06:08.464 00:06:08.464 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.464 suites 1 1 n/a 0 0 00:06:08.464 tests 2 2 2 0 0 00:06:08.464 asserts 5232 5232 5232 0 n/a 00:06:08.464 00:06:08.464 Elapsed time = 0.711 seconds 00:06:08.464 EAL: request: mp_malloc_sync 00:06:08.464 EAL: No shared files mode enabled, IPC is disabled 00:06:08.464 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:08.464 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.464 EAL: request: mp_malloc_sync 00:06:08.464 EAL: No shared files mode enabled, IPC is disabled 00:06:08.464 EAL: Heap on socket 0 was shrunk by 2MB 00:06:08.464 EAL: No shared files mode enabled, IPC is disabled 00:06:08.464 EAL: No shared files mode enabled, IPC is disabled 00:06:08.464 EAL: No shared files mode enabled, IPC is disabled 00:06:08.464 00:06:08.464 real 0m0.903s 00:06:08.464 user 0m0.460s 00:06:08.464 sys 0m0.314s 00:06:08.464 05:04:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.464 05:04:58 -- common/autotest_common.sh@10 -- # set +x 00:06:08.464 ************************************ 00:06:08.464 END TEST env_vtophys 00:06:08.464 ************************************ 00:06:08.464 05:04:58 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:08.464 05:04:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.464 05:04:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.464 05:04:58 -- common/autotest_common.sh@10 -- # set +x 00:06:08.464 ************************************ 00:06:08.464 START TEST env_pci 00:06:08.464 ************************************ 00:06:08.464 05:04:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:08.464 00:06:08.464 00:06:08.464 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.464 http://cunit.sourceforge.net/ 00:06:08.464 00:06:08.464 00:06:08.464 Suite: pci 00:06:08.464 Test: pci_hook ...[2024-12-08 05:04:58.225578] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 65766 has claimed it 00:06:08.464 passed 00:06:08.464 00:06:08.464 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.464 suites 1 1 n/a 0 0 00:06:08.464 tests 1 1 1 0 0 00:06:08.464 asserts 25 25 25 0 n/a 00:06:08.464 00:06:08.464 Elapsed time = 0.002 seconds 00:06:08.464 EAL: Cannot find device (10000:00:01.0) 00:06:08.464 EAL: Failed to attach device on primary process 00:06:08.464 00:06:08.464 real 0m0.021s 00:06:08.464 user 0m0.009s 00:06:08.464 sys 0m0.012s 00:06:08.464 05:04:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.464 05:04:58 -- common/autotest_common.sh@10 -- # set +x 00:06:08.464 ************************************ 00:06:08.464 END TEST env_pci 00:06:08.464 ************************************ 00:06:08.723 05:04:58 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:08.723 05:04:58 -- env/env.sh@15 -- # uname 00:06:08.723 05:04:58 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:08.723 05:04:58 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:06:08.723 05:04:58 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:08.723 05:04:58 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:06:08.723 05:04:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.723 05:04:58 -- common/autotest_common.sh@10 -- # set +x 00:06:08.723 ************************************ 00:06:08.723 START TEST env_dpdk_post_init 00:06:08.723 ************************************ 00:06:08.724 05:04:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:08.724 EAL: Detected CPU lcores: 10 00:06:08.724 EAL: Detected NUMA nodes: 1 00:06:08.724 EAL: Detected shared linkage of DPDK 00:06:08.724 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:08.724 EAL: Selected IOVA mode 'PA' 00:06:08.724 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:08.724 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:06:08.724 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:06:08.724 Starting DPDK initialization... 00:06:08.724 Starting SPDK post initialization... 00:06:08.724 SPDK NVMe probe 00:06:08.724 Attaching to 0000:00:06.0 00:06:08.724 Attaching to 0000:00:07.0 00:06:08.724 Attached to 0000:00:06.0 00:06:08.724 Attached to 0000:00:07.0 00:06:08.724 Cleaning up... 00:06:08.724 00:06:08.724 real 0m0.179s 00:06:08.724 user 0m0.041s 00:06:08.724 sys 0m0.039s 00:06:08.724 05:04:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.724 05:04:58 -- common/autotest_common.sh@10 -- # set +x 00:06:08.724 ************************************ 00:06:08.724 END TEST env_dpdk_post_init 00:06:08.724 ************************************ 00:06:08.983 05:04:58 -- env/env.sh@26 -- # uname 00:06:08.983 05:04:58 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:08.983 05:04:58 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:08.983 05:04:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.983 05:04:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.983 05:04:58 -- common/autotest_common.sh@10 -- # set +x 00:06:08.983 ************************************ 00:06:08.983 START TEST env_mem_callbacks 00:06:08.983 ************************************ 00:06:08.983 05:04:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:08.983 EAL: Detected CPU lcores: 10 00:06:08.983 EAL: Detected NUMA nodes: 1 00:06:08.983 EAL: Detected shared linkage of DPDK 00:06:08.983 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:08.983 EAL: Selected IOVA mode 'PA' 00:06:08.983 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:08.983 00:06:08.983 00:06:08.983 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.983 http://cunit.sourceforge.net/ 00:06:08.983 00:06:08.983 00:06:08.983 Suite: memory 00:06:08.983 Test: test ... 
00:06:08.984 register 0x200000200000 2097152 00:06:08.984 malloc 3145728 00:06:08.984 register 0x200000400000 4194304 00:06:08.984 buf 0x200000500000 len 3145728 PASSED 00:06:08.984 malloc 64 00:06:08.984 buf 0x2000004fff40 len 64 PASSED 00:06:08.984 malloc 4194304 00:06:08.984 register 0x200000800000 6291456 00:06:08.984 buf 0x200000a00000 len 4194304 PASSED 00:06:08.984 free 0x200000500000 3145728 00:06:08.984 free 0x2000004fff40 64 00:06:08.984 unregister 0x200000400000 4194304 PASSED 00:06:08.984 free 0x200000a00000 4194304 00:06:08.984 unregister 0x200000800000 6291456 PASSED 00:06:08.984 malloc 8388608 00:06:08.984 register 0x200000400000 10485760 00:06:08.984 buf 0x200000600000 len 8388608 PASSED 00:06:08.984 free 0x200000600000 8388608 00:06:08.984 unregister 0x200000400000 10485760 PASSED 00:06:08.984 passed 00:06:08.984 00:06:08.984 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.984 suites 1 1 n/a 0 0 00:06:08.984 tests 1 1 1 0 0 00:06:08.984 asserts 15 15 15 0 n/a 00:06:08.984 00:06:08.984 Elapsed time = 0.009 seconds 00:06:08.984 00:06:08.984 real 0m0.146s 00:06:08.984 user 0m0.020s 00:06:08.984 sys 0m0.023s 00:06:08.984 ************************************ 00:06:08.984 END TEST env_mem_callbacks 00:06:08.984 ************************************ 00:06:08.984 05:04:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.984 05:04:58 -- common/autotest_common.sh@10 -- # set +x 00:06:08.984 ************************************ 00:06:08.984 END TEST env 00:06:08.984 ************************************ 00:06:08.984 00:06:08.984 real 0m1.952s 00:06:08.984 user 0m0.959s 00:06:08.984 sys 0m0.643s 00:06:08.984 05:04:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.984 05:04:58 -- common/autotest_common.sh@10 -- # set +x 00:06:09.265 05:04:58 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:09.265 05:04:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:09.265 05:04:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.265 05:04:58 -- common/autotest_common.sh@10 -- # set +x 00:06:09.265 ************************************ 00:06:09.265 START TEST rpc 00:06:09.265 ************************************ 00:06:09.265 05:04:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:09.265 * Looking for test storage... 
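Every suite in this run is wrapped by the harness's run_test helper, which prints the START/END banners and the real/user/sys timings seen above. A simplified stand-in that mimics that behaviour (an illustration only, not the actual common/autotest_common.sh implementation):

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"            # run the wrapped script; timing shows up as real/user/sys
        local rc=$?
        echo "END TEST $name"
        return $rc
    }
    # e.g. run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh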
00:06:09.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:09.265 05:04:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:09.265 05:04:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:09.265 05:04:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:09.265 05:04:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:09.265 05:04:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:09.265 05:04:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:09.265 05:04:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:09.265 05:04:58 -- scripts/common.sh@335 -- # IFS=.-: 00:06:09.265 05:04:58 -- scripts/common.sh@335 -- # read -ra ver1 00:06:09.265 05:04:58 -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.265 05:04:58 -- scripts/common.sh@336 -- # read -ra ver2 00:06:09.265 05:04:58 -- scripts/common.sh@337 -- # local 'op=<' 00:06:09.265 05:04:58 -- scripts/common.sh@339 -- # ver1_l=2 00:06:09.265 05:04:58 -- scripts/common.sh@340 -- # ver2_l=1 00:06:09.265 05:04:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:09.265 05:04:58 -- scripts/common.sh@343 -- # case "$op" in 00:06:09.265 05:04:58 -- scripts/common.sh@344 -- # : 1 00:06:09.265 05:04:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:09.265 05:04:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.265 05:04:58 -- scripts/common.sh@364 -- # decimal 1 00:06:09.265 05:04:58 -- scripts/common.sh@352 -- # local d=1 00:06:09.265 05:04:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.265 05:04:58 -- scripts/common.sh@354 -- # echo 1 00:06:09.265 05:04:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:09.265 05:04:58 -- scripts/common.sh@365 -- # decimal 2 00:06:09.265 05:04:58 -- scripts/common.sh@352 -- # local d=2 00:06:09.265 05:04:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.265 05:04:58 -- scripts/common.sh@354 -- # echo 2 00:06:09.265 05:04:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:09.265 05:04:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:09.265 05:04:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:09.265 05:04:58 -- scripts/common.sh@367 -- # return 0 00:06:09.265 05:04:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.265 05:04:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:09.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.265 --rc genhtml_branch_coverage=1 00:06:09.265 --rc genhtml_function_coverage=1 00:06:09.265 --rc genhtml_legend=1 00:06:09.265 --rc geninfo_all_blocks=1 00:06:09.265 --rc geninfo_unexecuted_blocks=1 00:06:09.265 00:06:09.265 ' 00:06:09.265 05:04:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:09.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.265 --rc genhtml_branch_coverage=1 00:06:09.265 --rc genhtml_function_coverage=1 00:06:09.265 --rc genhtml_legend=1 00:06:09.265 --rc geninfo_all_blocks=1 00:06:09.265 --rc geninfo_unexecuted_blocks=1 00:06:09.265 00:06:09.265 ' 00:06:09.265 05:04:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:09.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.265 --rc genhtml_branch_coverage=1 00:06:09.265 --rc genhtml_function_coverage=1 00:06:09.265 --rc genhtml_legend=1 00:06:09.265 --rc geninfo_all_blocks=1 00:06:09.265 --rc geninfo_unexecuted_blocks=1 00:06:09.265 00:06:09.265 ' 00:06:09.265 05:04:58 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:09.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.265 --rc genhtml_branch_coverage=1 00:06:09.265 --rc genhtml_function_coverage=1 00:06:09.265 --rc genhtml_legend=1 00:06:09.265 --rc geninfo_all_blocks=1 00:06:09.265 --rc geninfo_unexecuted_blocks=1 00:06:09.265 00:06:09.265 ' 00:06:09.265 05:04:58 -- rpc/rpc.sh@65 -- # spdk_pid=65888 00:06:09.265 05:04:58 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.265 05:04:58 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:09.265 05:04:58 -- rpc/rpc.sh@67 -- # waitforlisten 65888 00:06:09.265 05:04:58 -- common/autotest_common.sh@829 -- # '[' -z 65888 ']' 00:06:09.265 05:04:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.265 05:04:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.265 05:04:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.265 05:04:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.265 05:04:58 -- common/autotest_common.sh@10 -- # set +x 00:06:09.265 [2024-12-08 05:04:59.048712] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:09.265 [2024-12-08 05:04:59.049027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65888 ] 00:06:09.524 [2024-12-08 05:04:59.190748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.524 [2024-12-08 05:04:59.231898] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:09.524 [2024-12-08 05:04:59.232296] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:09.524 [2024-12-08 05:04:59.232478] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 65888' to capture a snapshot of events at runtime. 00:06:09.524 [2024-12-08 05:04:59.232650] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid65888 for offline analysis/debug. 
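The startup notice above names the two ways to inspect the target's bdev tracepoints while PID 65888 is running; spelled out as commands (the destination path is arbitrary, /tmp is used here):

    # live snapshot of the tracepoint group enabled with -e bdev
    spdk_trace -s spdk_tgt -p 65888
    # or keep the shared-memory trace file for offline analysis
    cp /dev/shm/spdk_tgt_trace.pid65888 /tmp/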
00:06:09.524 [2024-12-08 05:04:59.232869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.461 05:05:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.461 05:05:00 -- common/autotest_common.sh@862 -- # return 0 00:06:10.461 05:05:00 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:10.461 05:05:00 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:10.461 05:05:00 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:10.461 05:05:00 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:10.461 05:05:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:10.461 05:05:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.461 05:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:10.461 ************************************ 00:06:10.461 START TEST rpc_integrity 00:06:10.461 ************************************ 00:06:10.461 05:05:00 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:06:10.461 05:05:00 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:10.461 05:05:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.461 05:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:10.461 05:05:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.461 05:05:00 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:10.461 05:05:00 -- rpc/rpc.sh@13 -- # jq length 00:06:10.461 05:05:00 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:10.461 05:05:00 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:10.461 05:05:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.461 05:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:10.461 05:05:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.461 05:05:00 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:10.461 05:05:00 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:10.461 05:05:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.461 05:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:10.461 05:05:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.461 05:05:00 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:10.461 { 00:06:10.461 "name": "Malloc0", 00:06:10.461 "aliases": [ 00:06:10.461 "8a9bc35f-2d0d-4d5a-81ef-157e0dca3249" 00:06:10.461 ], 00:06:10.461 "product_name": "Malloc disk", 00:06:10.461 "block_size": 512, 00:06:10.461 "num_blocks": 16384, 00:06:10.461 "uuid": "8a9bc35f-2d0d-4d5a-81ef-157e0dca3249", 00:06:10.461 "assigned_rate_limits": { 00:06:10.461 "rw_ios_per_sec": 0, 00:06:10.461 "rw_mbytes_per_sec": 0, 00:06:10.461 "r_mbytes_per_sec": 0, 00:06:10.461 "w_mbytes_per_sec": 0 00:06:10.461 }, 00:06:10.461 "claimed": false, 00:06:10.461 "zoned": false, 00:06:10.461 "supported_io_types": { 00:06:10.461 "read": true, 00:06:10.461 "write": true, 00:06:10.462 "unmap": true, 00:06:10.462 "write_zeroes": true, 00:06:10.462 "flush": true, 00:06:10.462 "reset": true, 00:06:10.462 "compare": false, 00:06:10.462 "compare_and_write": false, 00:06:10.462 "abort": true, 00:06:10.462 "nvme_admin": false, 00:06:10.462 "nvme_io": false 00:06:10.462 }, 00:06:10.462 "memory_domains": [ 00:06:10.462 { 00:06:10.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.462 
"dma_device_type": 2 00:06:10.462 } 00:06:10.462 ], 00:06:10.462 "driver_specific": {} 00:06:10.462 } 00:06:10.462 ]' 00:06:10.462 05:05:00 -- rpc/rpc.sh@17 -- # jq length 00:06:10.462 05:05:00 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:10.462 05:05:00 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:10.462 05:05:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.462 05:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:10.462 [2024-12-08 05:05:00.202899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:10.462 [2024-12-08 05:05:00.202963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:10.462 [2024-12-08 05:05:00.202982] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15ba030 00:06:10.462 [2024-12-08 05:05:00.202991] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:10.462 [2024-12-08 05:05:00.204374] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:10.462 [2024-12-08 05:05:00.204407] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:10.462 Passthru0 00:06:10.462 05:05:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.462 05:05:00 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:10.462 05:05:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.462 05:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:10.462 05:05:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.462 05:05:00 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:10.462 { 00:06:10.462 "name": "Malloc0", 00:06:10.462 "aliases": [ 00:06:10.462 "8a9bc35f-2d0d-4d5a-81ef-157e0dca3249" 00:06:10.462 ], 00:06:10.462 "product_name": "Malloc disk", 00:06:10.462 "block_size": 512, 00:06:10.462 "num_blocks": 16384, 00:06:10.462 "uuid": "8a9bc35f-2d0d-4d5a-81ef-157e0dca3249", 00:06:10.462 "assigned_rate_limits": { 00:06:10.462 "rw_ios_per_sec": 0, 00:06:10.462 "rw_mbytes_per_sec": 0, 00:06:10.462 "r_mbytes_per_sec": 0, 00:06:10.462 "w_mbytes_per_sec": 0 00:06:10.462 }, 00:06:10.462 "claimed": true, 00:06:10.462 "claim_type": "exclusive_write", 00:06:10.462 "zoned": false, 00:06:10.462 "supported_io_types": { 00:06:10.462 "read": true, 00:06:10.462 "write": true, 00:06:10.462 "unmap": true, 00:06:10.462 "write_zeroes": true, 00:06:10.462 "flush": true, 00:06:10.462 "reset": true, 00:06:10.462 "compare": false, 00:06:10.462 "compare_and_write": false, 00:06:10.462 "abort": true, 00:06:10.462 "nvme_admin": false, 00:06:10.462 "nvme_io": false 00:06:10.462 }, 00:06:10.462 "memory_domains": [ 00:06:10.462 { 00:06:10.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.462 "dma_device_type": 2 00:06:10.462 } 00:06:10.462 ], 00:06:10.462 "driver_specific": {} 00:06:10.462 }, 00:06:10.462 { 00:06:10.462 "name": "Passthru0", 00:06:10.462 "aliases": [ 00:06:10.462 "4a767cee-75a1-55fe-abca-6cfe265f119c" 00:06:10.462 ], 00:06:10.462 "product_name": "passthru", 00:06:10.462 "block_size": 512, 00:06:10.462 "num_blocks": 16384, 00:06:10.462 "uuid": "4a767cee-75a1-55fe-abca-6cfe265f119c", 00:06:10.462 "assigned_rate_limits": { 00:06:10.462 "rw_ios_per_sec": 0, 00:06:10.462 "rw_mbytes_per_sec": 0, 00:06:10.462 "r_mbytes_per_sec": 0, 00:06:10.462 "w_mbytes_per_sec": 0 00:06:10.462 }, 00:06:10.462 "claimed": false, 00:06:10.462 "zoned": false, 00:06:10.462 "supported_io_types": { 00:06:10.462 "read": true, 00:06:10.462 "write": true, 00:06:10.462 "unmap": true, 00:06:10.462 
"write_zeroes": true, 00:06:10.462 "flush": true, 00:06:10.462 "reset": true, 00:06:10.462 "compare": false, 00:06:10.462 "compare_and_write": false, 00:06:10.462 "abort": true, 00:06:10.462 "nvme_admin": false, 00:06:10.462 "nvme_io": false 00:06:10.462 }, 00:06:10.462 "memory_domains": [ 00:06:10.462 { 00:06:10.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.462 "dma_device_type": 2 00:06:10.462 } 00:06:10.462 ], 00:06:10.462 "driver_specific": { 00:06:10.462 "passthru": { 00:06:10.462 "name": "Passthru0", 00:06:10.462 "base_bdev_name": "Malloc0" 00:06:10.462 } 00:06:10.462 } 00:06:10.462 } 00:06:10.462 ]' 00:06:10.462 05:05:00 -- rpc/rpc.sh@21 -- # jq length 00:06:10.721 05:05:00 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:10.721 05:05:00 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:10.721 05:05:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.721 05:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:10.721 05:05:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.721 05:05:00 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:10.721 05:05:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.721 05:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:10.721 05:05:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.721 05:05:00 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:10.721 05:05:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.721 05:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:10.721 05:05:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.721 05:05:00 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:10.721 05:05:00 -- rpc/rpc.sh@26 -- # jq length 00:06:10.721 05:05:00 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:10.721 ************************************ 00:06:10.721 END TEST rpc_integrity 00:06:10.721 ************************************ 00:06:10.721 00:06:10.721 real 0m0.307s 00:06:10.721 user 0m0.202s 00:06:10.721 sys 0m0.039s 00:06:10.721 05:05:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.721 05:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:10.721 05:05:00 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:10.721 05:05:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:10.721 05:05:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.721 05:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:10.721 ************************************ 00:06:10.721 START TEST rpc_plugins 00:06:10.722 ************************************ 00:06:10.722 05:05:00 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:06:10.722 05:05:00 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:10.722 05:05:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.722 05:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:10.722 05:05:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.722 05:05:00 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:10.722 05:05:00 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:10.722 05:05:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.722 05:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:10.722 05:05:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.722 05:05:00 -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:10.722 { 00:06:10.722 "name": "Malloc1", 00:06:10.722 "aliases": [ 00:06:10.722 "85736e77-10f8-4da5-8d81-f927898add3c" 00:06:10.722 ], 00:06:10.722 "product_name": "Malloc disk", 00:06:10.722 
"block_size": 4096, 00:06:10.722 "num_blocks": 256, 00:06:10.722 "uuid": "85736e77-10f8-4da5-8d81-f927898add3c", 00:06:10.722 "assigned_rate_limits": { 00:06:10.722 "rw_ios_per_sec": 0, 00:06:10.722 "rw_mbytes_per_sec": 0, 00:06:10.722 "r_mbytes_per_sec": 0, 00:06:10.722 "w_mbytes_per_sec": 0 00:06:10.722 }, 00:06:10.722 "claimed": false, 00:06:10.722 "zoned": false, 00:06:10.722 "supported_io_types": { 00:06:10.722 "read": true, 00:06:10.722 "write": true, 00:06:10.722 "unmap": true, 00:06:10.722 "write_zeroes": true, 00:06:10.722 "flush": true, 00:06:10.722 "reset": true, 00:06:10.722 "compare": false, 00:06:10.722 "compare_and_write": false, 00:06:10.722 "abort": true, 00:06:10.722 "nvme_admin": false, 00:06:10.722 "nvme_io": false 00:06:10.722 }, 00:06:10.722 "memory_domains": [ 00:06:10.722 { 00:06:10.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.722 "dma_device_type": 2 00:06:10.722 } 00:06:10.722 ], 00:06:10.722 "driver_specific": {} 00:06:10.722 } 00:06:10.722 ]' 00:06:10.722 05:05:00 -- rpc/rpc.sh@32 -- # jq length 00:06:10.722 05:05:00 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:10.722 05:05:00 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:10.722 05:05:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.722 05:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:10.981 05:05:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.981 05:05:00 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:10.981 05:05:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.981 05:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:10.981 05:05:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.981 05:05:00 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:10.981 05:05:00 -- rpc/rpc.sh@36 -- # jq length 00:06:10.981 ************************************ 00:06:10.981 END TEST rpc_plugins 00:06:10.981 ************************************ 00:06:10.981 05:05:00 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:10.981 00:06:10.981 real 0m0.156s 00:06:10.981 user 0m0.104s 00:06:10.981 sys 0m0.015s 00:06:10.981 05:05:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.981 05:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:10.981 05:05:00 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:10.981 05:05:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:10.981 05:05:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.981 05:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:10.981 ************************************ 00:06:10.981 START TEST rpc_trace_cmd_test 00:06:10.981 ************************************ 00:06:10.981 05:05:00 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:06:10.981 05:05:00 -- rpc/rpc.sh@40 -- # local info 00:06:10.981 05:05:00 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:10.981 05:05:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.981 05:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:10.981 05:05:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.981 05:05:00 -- rpc/rpc.sh@42 -- # info='{ 00:06:10.981 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid65888", 00:06:10.981 "tpoint_group_mask": "0x8", 00:06:10.981 "iscsi_conn": { 00:06:10.981 "mask": "0x2", 00:06:10.981 "tpoint_mask": "0x0" 00:06:10.981 }, 00:06:10.981 "scsi": { 00:06:10.981 "mask": "0x4", 00:06:10.981 "tpoint_mask": "0x0" 00:06:10.981 }, 00:06:10.981 "bdev": { 00:06:10.981 "mask": "0x8", 00:06:10.981 "tpoint_mask": 
"0xffffffffffffffff" 00:06:10.981 }, 00:06:10.981 "nvmf_rdma": { 00:06:10.981 "mask": "0x10", 00:06:10.981 "tpoint_mask": "0x0" 00:06:10.981 }, 00:06:10.981 "nvmf_tcp": { 00:06:10.981 "mask": "0x20", 00:06:10.981 "tpoint_mask": "0x0" 00:06:10.981 }, 00:06:10.981 "ftl": { 00:06:10.981 "mask": "0x40", 00:06:10.981 "tpoint_mask": "0x0" 00:06:10.981 }, 00:06:10.981 "blobfs": { 00:06:10.981 "mask": "0x80", 00:06:10.981 "tpoint_mask": "0x0" 00:06:10.981 }, 00:06:10.981 "dsa": { 00:06:10.981 "mask": "0x200", 00:06:10.981 "tpoint_mask": "0x0" 00:06:10.981 }, 00:06:10.981 "thread": { 00:06:10.981 "mask": "0x400", 00:06:10.981 "tpoint_mask": "0x0" 00:06:10.981 }, 00:06:10.981 "nvme_pcie": { 00:06:10.981 "mask": "0x800", 00:06:10.981 "tpoint_mask": "0x0" 00:06:10.981 }, 00:06:10.981 "iaa": { 00:06:10.981 "mask": "0x1000", 00:06:10.981 "tpoint_mask": "0x0" 00:06:10.981 }, 00:06:10.981 "nvme_tcp": { 00:06:10.981 "mask": "0x2000", 00:06:10.981 "tpoint_mask": "0x0" 00:06:10.981 }, 00:06:10.981 "bdev_nvme": { 00:06:10.981 "mask": "0x4000", 00:06:10.981 "tpoint_mask": "0x0" 00:06:10.981 } 00:06:10.981 }' 00:06:10.981 05:05:00 -- rpc/rpc.sh@43 -- # jq length 00:06:10.981 05:05:00 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:06:10.981 05:05:00 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:10.981 05:05:00 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:10.981 05:05:00 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:11.240 05:05:00 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:11.241 05:05:00 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:11.241 05:05:00 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:11.241 05:05:00 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:11.241 ************************************ 00:06:11.241 END TEST rpc_trace_cmd_test 00:06:11.241 ************************************ 00:06:11.241 05:05:00 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:11.241 00:06:11.241 real 0m0.270s 00:06:11.241 user 0m0.232s 00:06:11.241 sys 0m0.029s 00:06:11.241 05:05:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.241 05:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:11.241 05:05:00 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:11.241 05:05:00 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:11.241 05:05:00 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:11.241 05:05:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.241 05:05:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.241 05:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:11.241 ************************************ 00:06:11.241 START TEST rpc_daemon_integrity 00:06:11.241 ************************************ 00:06:11.241 05:05:00 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:06:11.241 05:05:00 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:11.241 05:05:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.241 05:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:11.241 05:05:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.241 05:05:00 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:11.241 05:05:00 -- rpc/rpc.sh@13 -- # jq length 00:06:11.241 05:05:01 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:11.241 05:05:01 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:11.241 05:05:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.241 05:05:01 -- common/autotest_common.sh@10 -- # set +x 00:06:11.500 05:05:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.500 05:05:01 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:11.500 05:05:01 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:11.500 05:05:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.500 05:05:01 -- common/autotest_common.sh@10 -- # set +x 00:06:11.500 05:05:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.500 05:05:01 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:11.500 { 00:06:11.500 "name": "Malloc2", 00:06:11.500 "aliases": [ 00:06:11.500 "51322121-a15a-4b56-b719-909a69f00051" 00:06:11.500 ], 00:06:11.500 "product_name": "Malloc disk", 00:06:11.500 "block_size": 512, 00:06:11.500 "num_blocks": 16384, 00:06:11.500 "uuid": "51322121-a15a-4b56-b719-909a69f00051", 00:06:11.500 "assigned_rate_limits": { 00:06:11.500 "rw_ios_per_sec": 0, 00:06:11.500 "rw_mbytes_per_sec": 0, 00:06:11.500 "r_mbytes_per_sec": 0, 00:06:11.500 "w_mbytes_per_sec": 0 00:06:11.500 }, 00:06:11.500 "claimed": false, 00:06:11.500 "zoned": false, 00:06:11.500 "supported_io_types": { 00:06:11.500 "read": true, 00:06:11.500 "write": true, 00:06:11.500 "unmap": true, 00:06:11.500 "write_zeroes": true, 00:06:11.500 "flush": true, 00:06:11.500 "reset": true, 00:06:11.500 "compare": false, 00:06:11.500 "compare_and_write": false, 00:06:11.500 "abort": true, 00:06:11.500 "nvme_admin": false, 00:06:11.500 "nvme_io": false 00:06:11.500 }, 00:06:11.500 "memory_domains": [ 00:06:11.500 { 00:06:11.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:11.500 "dma_device_type": 2 00:06:11.500 } 00:06:11.500 ], 00:06:11.500 "driver_specific": {} 00:06:11.500 } 00:06:11.500 ]' 00:06:11.500 05:05:01 -- rpc/rpc.sh@17 -- # jq length 00:06:11.500 05:05:01 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:11.500 05:05:01 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:11.500 05:05:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.500 05:05:01 -- common/autotest_common.sh@10 -- # set +x 00:06:11.500 [2024-12-08 05:05:01.107332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:11.500 [2024-12-08 05:05:01.107386] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:11.500 [2024-12-08 05:05:01.107401] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15ba9d0 00:06:11.500 [2024-12-08 05:05:01.107409] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:11.500 [2024-12-08 05:05:01.108650] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:11.500 [2024-12-08 05:05:01.108723] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:11.500 Passthru0 00:06:11.500 05:05:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.500 05:05:01 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:11.500 05:05:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.500 05:05:01 -- common/autotest_common.sh@10 -- # set +x 00:06:11.500 05:05:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.500 05:05:01 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:11.500 { 00:06:11.500 "name": "Malloc2", 00:06:11.500 "aliases": [ 00:06:11.500 "51322121-a15a-4b56-b719-909a69f00051" 00:06:11.500 ], 00:06:11.500 "product_name": "Malloc disk", 00:06:11.500 "block_size": 512, 00:06:11.500 "num_blocks": 16384, 00:06:11.500 "uuid": "51322121-a15a-4b56-b719-909a69f00051", 00:06:11.500 "assigned_rate_limits": { 00:06:11.500 "rw_ios_per_sec": 0, 00:06:11.500 "rw_mbytes_per_sec": 0, 00:06:11.500 "r_mbytes_per_sec": 0, 00:06:11.500 
"w_mbytes_per_sec": 0 00:06:11.500 }, 00:06:11.500 "claimed": true, 00:06:11.500 "claim_type": "exclusive_write", 00:06:11.500 "zoned": false, 00:06:11.500 "supported_io_types": { 00:06:11.500 "read": true, 00:06:11.500 "write": true, 00:06:11.500 "unmap": true, 00:06:11.500 "write_zeroes": true, 00:06:11.500 "flush": true, 00:06:11.500 "reset": true, 00:06:11.500 "compare": false, 00:06:11.500 "compare_and_write": false, 00:06:11.500 "abort": true, 00:06:11.500 "nvme_admin": false, 00:06:11.500 "nvme_io": false 00:06:11.500 }, 00:06:11.500 "memory_domains": [ 00:06:11.500 { 00:06:11.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:11.500 "dma_device_type": 2 00:06:11.500 } 00:06:11.500 ], 00:06:11.500 "driver_specific": {} 00:06:11.500 }, 00:06:11.500 { 00:06:11.500 "name": "Passthru0", 00:06:11.500 "aliases": [ 00:06:11.500 "be6a287d-49ca-577d-9d34-5fa94251f7b4" 00:06:11.500 ], 00:06:11.500 "product_name": "passthru", 00:06:11.500 "block_size": 512, 00:06:11.500 "num_blocks": 16384, 00:06:11.500 "uuid": "be6a287d-49ca-577d-9d34-5fa94251f7b4", 00:06:11.500 "assigned_rate_limits": { 00:06:11.500 "rw_ios_per_sec": 0, 00:06:11.500 "rw_mbytes_per_sec": 0, 00:06:11.500 "r_mbytes_per_sec": 0, 00:06:11.500 "w_mbytes_per_sec": 0 00:06:11.500 }, 00:06:11.500 "claimed": false, 00:06:11.500 "zoned": false, 00:06:11.500 "supported_io_types": { 00:06:11.500 "read": true, 00:06:11.500 "write": true, 00:06:11.500 "unmap": true, 00:06:11.500 "write_zeroes": true, 00:06:11.500 "flush": true, 00:06:11.500 "reset": true, 00:06:11.500 "compare": false, 00:06:11.500 "compare_and_write": false, 00:06:11.500 "abort": true, 00:06:11.500 "nvme_admin": false, 00:06:11.500 "nvme_io": false 00:06:11.510 }, 00:06:11.510 "memory_domains": [ 00:06:11.511 { 00:06:11.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:11.511 "dma_device_type": 2 00:06:11.511 } 00:06:11.511 ], 00:06:11.511 "driver_specific": { 00:06:11.511 "passthru": { 00:06:11.511 "name": "Passthru0", 00:06:11.511 "base_bdev_name": "Malloc2" 00:06:11.511 } 00:06:11.511 } 00:06:11.511 } 00:06:11.511 ]' 00:06:11.511 05:05:01 -- rpc/rpc.sh@21 -- # jq length 00:06:11.511 05:05:01 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:11.511 05:05:01 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:11.511 05:05:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.511 05:05:01 -- common/autotest_common.sh@10 -- # set +x 00:06:11.511 05:05:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.511 05:05:01 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:11.511 05:05:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.511 05:05:01 -- common/autotest_common.sh@10 -- # set +x 00:06:11.511 05:05:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.511 05:05:01 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:11.511 05:05:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.511 05:05:01 -- common/autotest_common.sh@10 -- # set +x 00:06:11.511 05:05:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.511 05:05:01 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:11.511 05:05:01 -- rpc/rpc.sh@26 -- # jq length 00:06:11.511 ************************************ 00:06:11.511 END TEST rpc_daemon_integrity 00:06:11.511 ************************************ 00:06:11.511 05:05:01 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:11.511 00:06:11.511 real 0m0.318s 00:06:11.511 user 0m0.208s 00:06:11.511 sys 0m0.042s 00:06:11.511 05:05:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.511 
05:05:01 -- common/autotest_common.sh@10 -- # set +x 00:06:11.769 05:05:01 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:11.769 05:05:01 -- rpc/rpc.sh@84 -- # killprocess 65888 00:06:11.769 05:05:01 -- common/autotest_common.sh@936 -- # '[' -z 65888 ']' 00:06:11.769 05:05:01 -- common/autotest_common.sh@940 -- # kill -0 65888 00:06:11.769 05:05:01 -- common/autotest_common.sh@941 -- # uname 00:06:11.769 05:05:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:11.769 05:05:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65888 00:06:11.769 killing process with pid 65888 00:06:11.769 05:05:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:11.769 05:05:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:11.769 05:05:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65888' 00:06:11.769 05:05:01 -- common/autotest_common.sh@955 -- # kill 65888 00:06:11.769 05:05:01 -- common/autotest_common.sh@960 -- # wait 65888 00:06:12.028 ************************************ 00:06:12.028 END TEST rpc 00:06:12.028 ************************************ 00:06:12.028 00:06:12.028 real 0m2.788s 00:06:12.028 user 0m3.713s 00:06:12.028 sys 0m0.608s 00:06:12.028 05:05:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:12.028 05:05:01 -- common/autotest_common.sh@10 -- # set +x 00:06:12.028 05:05:01 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:12.028 05:05:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.028 05:05:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.028 05:05:01 -- common/autotest_common.sh@10 -- # set +x 00:06:12.028 ************************************ 00:06:12.028 START TEST rpc_client 00:06:12.028 ************************************ 00:06:12.028 05:05:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:12.028 * Looking for test storage... 00:06:12.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:12.028 05:05:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:12.028 05:05:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:12.028 05:05:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:12.028 05:05:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:12.028 05:05:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:12.028 05:05:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:12.028 05:05:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:12.028 05:05:01 -- scripts/common.sh@335 -- # IFS=.-: 00:06:12.028 05:05:01 -- scripts/common.sh@335 -- # read -ra ver1 00:06:12.028 05:05:01 -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.028 05:05:01 -- scripts/common.sh@336 -- # read -ra ver2 00:06:12.028 05:05:01 -- scripts/common.sh@337 -- # local 'op=<' 00:06:12.028 05:05:01 -- scripts/common.sh@339 -- # ver1_l=2 00:06:12.028 05:05:01 -- scripts/common.sh@340 -- # ver2_l=1 00:06:12.028 05:05:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:12.028 05:05:01 -- scripts/common.sh@343 -- # case "$op" in 00:06:12.028 05:05:01 -- scripts/common.sh@344 -- # : 1 00:06:12.028 05:05:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:12.028 05:05:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.028 05:05:01 -- scripts/common.sh@364 -- # decimal 1 00:06:12.028 05:05:01 -- scripts/common.sh@352 -- # local d=1 00:06:12.028 05:05:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.028 05:05:01 -- scripts/common.sh@354 -- # echo 1 00:06:12.028 05:05:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:12.028 05:05:01 -- scripts/common.sh@365 -- # decimal 2 00:06:12.028 05:05:01 -- scripts/common.sh@352 -- # local d=2 00:06:12.028 05:05:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.028 05:05:01 -- scripts/common.sh@354 -- # echo 2 00:06:12.028 05:05:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:12.028 05:05:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:12.028 05:05:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:12.028 05:05:01 -- scripts/common.sh@367 -- # return 0 00:06:12.028 05:05:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.028 05:05:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:12.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.028 --rc genhtml_branch_coverage=1 00:06:12.028 --rc genhtml_function_coverage=1 00:06:12.028 --rc genhtml_legend=1 00:06:12.028 --rc geninfo_all_blocks=1 00:06:12.028 --rc geninfo_unexecuted_blocks=1 00:06:12.028 00:06:12.028 ' 00:06:12.028 05:05:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:12.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.028 --rc genhtml_branch_coverage=1 00:06:12.028 --rc genhtml_function_coverage=1 00:06:12.028 --rc genhtml_legend=1 00:06:12.028 --rc geninfo_all_blocks=1 00:06:12.028 --rc geninfo_unexecuted_blocks=1 00:06:12.028 00:06:12.028 ' 00:06:12.028 05:05:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:12.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.028 --rc genhtml_branch_coverage=1 00:06:12.028 --rc genhtml_function_coverage=1 00:06:12.028 --rc genhtml_legend=1 00:06:12.028 --rc geninfo_all_blocks=1 00:06:12.028 --rc geninfo_unexecuted_blocks=1 00:06:12.029 00:06:12.029 ' 00:06:12.029 05:05:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:12.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.029 --rc genhtml_branch_coverage=1 00:06:12.029 --rc genhtml_function_coverage=1 00:06:12.029 --rc genhtml_legend=1 00:06:12.029 --rc geninfo_all_blocks=1 00:06:12.029 --rc geninfo_unexecuted_blocks=1 00:06:12.029 00:06:12.029 ' 00:06:12.029 05:05:01 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:12.287 OK 00:06:12.287 05:05:01 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:12.287 00:06:12.287 real 0m0.205s 00:06:12.287 user 0m0.132s 00:06:12.287 sys 0m0.081s 00:06:12.287 ************************************ 00:06:12.287 END TEST rpc_client 00:06:12.287 ************************************ 00:06:12.287 05:05:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:12.287 05:05:01 -- common/autotest_common.sh@10 -- # set +x 00:06:12.287 05:05:01 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:12.287 05:05:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.287 05:05:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.287 05:05:01 -- common/autotest_common.sh@10 -- # set +x 00:06:12.287 ************************************ 00:06:12.287 START TEST 
json_config 00:06:12.287 ************************************ 00:06:12.287 05:05:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:12.287 05:05:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:12.287 05:05:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:12.287 05:05:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:12.287 05:05:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:12.287 05:05:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:12.287 05:05:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:12.287 05:05:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:12.287 05:05:02 -- scripts/common.sh@335 -- # IFS=.-: 00:06:12.287 05:05:02 -- scripts/common.sh@335 -- # read -ra ver1 00:06:12.287 05:05:02 -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.287 05:05:02 -- scripts/common.sh@336 -- # read -ra ver2 00:06:12.287 05:05:02 -- scripts/common.sh@337 -- # local 'op=<' 00:06:12.287 05:05:02 -- scripts/common.sh@339 -- # ver1_l=2 00:06:12.287 05:05:02 -- scripts/common.sh@340 -- # ver2_l=1 00:06:12.287 05:05:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:12.287 05:05:02 -- scripts/common.sh@343 -- # case "$op" in 00:06:12.287 05:05:02 -- scripts/common.sh@344 -- # : 1 00:06:12.287 05:05:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:12.287 05:05:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.287 05:05:02 -- scripts/common.sh@364 -- # decimal 1 00:06:12.287 05:05:02 -- scripts/common.sh@352 -- # local d=1 00:06:12.287 05:05:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.287 05:05:02 -- scripts/common.sh@354 -- # echo 1 00:06:12.287 05:05:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:12.287 05:05:02 -- scripts/common.sh@365 -- # decimal 2 00:06:12.287 05:05:02 -- scripts/common.sh@352 -- # local d=2 00:06:12.287 05:05:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.287 05:05:02 -- scripts/common.sh@354 -- # echo 2 00:06:12.287 05:05:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:12.287 05:05:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:12.287 05:05:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:12.287 05:05:02 -- scripts/common.sh@367 -- # return 0 00:06:12.287 05:05:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.287 05:05:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:12.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.287 --rc genhtml_branch_coverage=1 00:06:12.287 --rc genhtml_function_coverage=1 00:06:12.287 --rc genhtml_legend=1 00:06:12.287 --rc geninfo_all_blocks=1 00:06:12.287 --rc geninfo_unexecuted_blocks=1 00:06:12.287 00:06:12.287 ' 00:06:12.287 05:05:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:12.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.288 --rc genhtml_branch_coverage=1 00:06:12.288 --rc genhtml_function_coverage=1 00:06:12.288 --rc genhtml_legend=1 00:06:12.288 --rc geninfo_all_blocks=1 00:06:12.288 --rc geninfo_unexecuted_blocks=1 00:06:12.288 00:06:12.288 ' 00:06:12.288 05:05:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:12.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.288 --rc genhtml_branch_coverage=1 00:06:12.288 --rc genhtml_function_coverage=1 00:06:12.288 --rc genhtml_legend=1 00:06:12.288 --rc 
geninfo_all_blocks=1 00:06:12.288 --rc geninfo_unexecuted_blocks=1 00:06:12.288 00:06:12.288 ' 00:06:12.288 05:05:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:12.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.288 --rc genhtml_branch_coverage=1 00:06:12.288 --rc genhtml_function_coverage=1 00:06:12.288 --rc genhtml_legend=1 00:06:12.288 --rc geninfo_all_blocks=1 00:06:12.288 --rc geninfo_unexecuted_blocks=1 00:06:12.288 00:06:12.288 ' 00:06:12.288 05:05:02 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:12.288 05:05:02 -- nvmf/common.sh@7 -- # uname -s 00:06:12.288 05:05:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.288 05:05:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.288 05:05:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.288 05:05:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.288 05:05:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.288 05:05:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.288 05:05:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.288 05:05:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.288 05:05:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.288 05:05:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.288 05:05:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:06:12.288 05:05:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:06:12.288 05:05:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.288 05:05:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.288 05:05:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:12.288 05:05:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:12.288 05:05:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.288 05:05:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.288 05:05:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.288 05:05:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.288 05:05:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.288 05:05:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.288 
05:05:02 -- paths/export.sh@5 -- # export PATH 00:06:12.288 05:05:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.288 05:05:02 -- nvmf/common.sh@46 -- # : 0 00:06:12.288 05:05:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:12.288 05:05:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:12.288 05:05:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:12.288 05:05:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:12.288 05:05:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:12.288 05:05:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:12.288 05:05:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:12.288 05:05:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:12.288 05:05:02 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:06:12.288 05:05:02 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:06:12.288 05:05:02 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:06:12.288 05:05:02 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:12.288 05:05:02 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:06:12.288 05:05:02 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:06:12.288 05:05:02 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:12.288 05:05:02 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:06:12.288 05:05:02 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:12.288 INFO: JSON configuration test init 00:06:12.288 05:05:02 -- json_config/json_config.sh@32 -- # declare -A app_params 00:06:12.288 05:05:02 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:12.288 05:05:02 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:06:12.288 05:05:02 -- json_config/json_config.sh@43 -- # last_event_id=0 00:06:12.288 05:05:02 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:12.288 05:05:02 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:06:12.288 05:05:02 -- json_config/json_config.sh@420 -- # json_config_test_init 00:06:12.288 05:05:02 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:06:12.288 05:05:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.288 05:05:02 -- common/autotest_common.sh@10 -- # set +x 00:06:12.546 05:05:02 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:06:12.546 05:05:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.546 05:05:02 -- common/autotest_common.sh@10 -- # set +x 00:06:12.546 Waiting for target to run... 00:06:12.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
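The lines that follow launch spdk_tgt with --wait-for-rpc and then wait for its RPC socket before any configuration is sent. A condensed, stand-alone sketch of that start-and-wait step, assuming the binary and socket paths used by this run (tgt_pid is an illustrative variable name; the script itself keeps the pid in app_pid):

    # start the target paused (--wait-for-rpc) so it only accepts configuration RPCs
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    tgt_pid=$!

    # block until the UNIX-domain RPC socket appears, then configuration can begin
    while [ ! -S /var/tmp/spdk_tgt.sock ]; do
        sleep 0.5
    done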
00:06:12.546 05:05:02 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:06:12.546 05:05:02 -- json_config/json_config.sh@98 -- # local app=target 00:06:12.546 05:05:02 -- json_config/json_config.sh@99 -- # shift 00:06:12.546 05:05:02 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:12.546 05:05:02 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:12.546 05:05:02 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:12.546 05:05:02 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:12.546 05:05:02 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:12.546 05:05:02 -- json_config/json_config.sh@111 -- # app_pid[$app]=66141 00:06:12.546 05:05:02 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:12.546 05:05:02 -- json_config/json_config.sh@114 -- # waitforlisten 66141 /var/tmp/spdk_tgt.sock 00:06:12.546 05:05:02 -- common/autotest_common.sh@829 -- # '[' -z 66141 ']' 00:06:12.546 05:05:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:12.546 05:05:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.546 05:05:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:12.546 05:05:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.547 05:05:02 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:12.547 05:05:02 -- common/autotest_common.sh@10 -- # set +x 00:06:12.547 [2024-12-08 05:05:02.141045] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.547 [2024-12-08 05:05:02.141313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66141 ] 00:06:12.804 [2024-12-08 05:05:02.464462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.804 [2024-12-08 05:05:02.486889] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:12.804 [2024-12-08 05:05:02.487053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.738 00:06:13.738 05:05:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.738 05:05:03 -- common/autotest_common.sh@862 -- # return 0 00:06:13.738 05:05:03 -- json_config/json_config.sh@115 -- # echo '' 00:06:13.738 05:05:03 -- json_config/json_config.sh@322 -- # create_accel_config 00:06:13.738 05:05:03 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:06:13.738 05:05:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:13.738 05:05:03 -- common/autotest_common.sh@10 -- # set +x 00:06:13.738 05:05:03 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:06:13.738 05:05:03 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:06:13.738 05:05:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:13.738 05:05:03 -- common/autotest_common.sh@10 -- # set +x 00:06:13.738 05:05:03 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:13.738 05:05:03 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:06:13.738 05:05:03 -- json_config/json_config.sh@36 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:13.999 05:05:03 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:06:13.999 05:05:03 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:06:13.999 05:05:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:13.999 05:05:03 -- common/autotest_common.sh@10 -- # set +x 00:06:13.999 05:05:03 -- json_config/json_config.sh@48 -- # local ret=0 00:06:13.999 05:05:03 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:13.999 05:05:03 -- json_config/json_config.sh@49 -- # local enabled_types 00:06:13.999 05:05:03 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:13.999 05:05:03 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:13.999 05:05:03 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:14.256 05:05:03 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:14.256 05:05:03 -- json_config/json_config.sh@51 -- # local get_types 00:06:14.256 05:05:03 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:14.256 05:05:03 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:06:14.256 05:05:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:14.256 05:05:03 -- common/autotest_common.sh@10 -- # set +x 00:06:14.256 05:05:03 -- json_config/json_config.sh@58 -- # return 0 00:06:14.256 05:05:03 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:06:14.256 05:05:03 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:06:14.256 05:05:03 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:06:14.256 05:05:03 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:06:14.256 05:05:03 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:06:14.256 05:05:03 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:06:14.256 05:05:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:14.256 05:05:03 -- common/autotest_common.sh@10 -- # set +x 00:06:14.256 05:05:03 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:14.256 05:05:03 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:06:14.256 05:05:03 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:06:14.256 05:05:03 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:14.256 05:05:03 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:14.514 MallocForNvmf0 00:06:14.514 05:05:04 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:14.514 05:05:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:14.772 MallocForNvmf1 00:06:14.772 05:05:04 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:14.772 05:05:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:15.031 [2024-12-08 05:05:04.753424] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:06:15.031 05:05:04 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:15.031 05:05:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:15.290 05:05:05 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:15.290 05:05:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:15.547 05:05:05 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:15.547 05:05:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:15.805 05:05:05 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:15.805 05:05:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:16.063 [2024-12-08 05:05:05.758160] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:16.063 05:05:05 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:06:16.063 05:05:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:16.063 05:05:05 -- common/autotest_common.sh@10 -- # set +x 00:06:16.063 05:05:05 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:06:16.063 05:05:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:16.063 05:05:05 -- common/autotest_common.sh@10 -- # set +x 00:06:16.321 05:05:05 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:06:16.321 05:05:05 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:16.321 05:05:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:16.321 MallocBdevForConfigChangeCheck 00:06:16.321 05:05:06 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:06:16.321 05:05:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:16.321 05:05:06 -- common/autotest_common.sh@10 -- # set +x 00:06:16.579 05:05:06 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:06:16.579 05:05:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:16.838 INFO: shutting down applications... 00:06:16.838 05:05:06 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
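Stripped of the xtrace noise, the target configuration built and saved above amounts to a short RPC sequence. A condensed sketch, assuming the rpc.py path and socket shown in the log; for bdev_malloc_create the first argument is the size in MiB and the second the block size in bytes, which matches the num_blocks/block_size pairs reported for the earlier malloc bdevs:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock

    # two malloc bdevs to use as namespaces (size in MiB, block size in bytes)
    $rpc -s $sock bdev_malloc_create 8 512  --name MallocForNvmf0
    $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1

    # TCP transport, one subsystem, both namespaces, one listener on 127.0.0.1:4420
    $rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

    # snapshot the whole configuration so it can be replayed with --json on relaunch
    $rpc -s $sock save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json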
00:06:16.838 05:05:06 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:06:16.838 05:05:06 -- json_config/json_config.sh@431 -- # json_config_clear target 00:06:16.838 05:05:06 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:06:16.838 05:05:06 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:17.097 Calling clear_iscsi_subsystem 00:06:17.097 Calling clear_nvmf_subsystem 00:06:17.097 Calling clear_nbd_subsystem 00:06:17.097 Calling clear_ublk_subsystem 00:06:17.097 Calling clear_vhost_blk_subsystem 00:06:17.097 Calling clear_vhost_scsi_subsystem 00:06:17.097 Calling clear_scheduler_subsystem 00:06:17.097 Calling clear_bdev_subsystem 00:06:17.097 Calling clear_accel_subsystem 00:06:17.097 Calling clear_vmd_subsystem 00:06:17.097 Calling clear_sock_subsystem 00:06:17.097 Calling clear_iobuf_subsystem 00:06:17.097 05:05:06 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:17.097 05:05:06 -- json_config/json_config.sh@396 -- # count=100 00:06:17.097 05:05:06 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:06:17.097 05:05:06 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:17.097 05:05:06 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:17.097 05:05:06 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:17.666 05:05:07 -- json_config/json_config.sh@398 -- # break 00:06:17.666 05:05:07 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:06:17.666 05:05:07 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:06:17.666 05:05:07 -- json_config/json_config.sh@120 -- # local app=target 00:06:17.666 05:05:07 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:06:17.666 05:05:07 -- json_config/json_config.sh@124 -- # [[ -n 66141 ]] 00:06:17.666 05:05:07 -- json_config/json_config.sh@127 -- # kill -SIGINT 66141 00:06:17.666 05:05:07 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:06:17.666 05:05:07 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:17.666 05:05:07 -- json_config/json_config.sh@130 -- # kill -0 66141 00:06:17.666 05:05:07 -- json_config/json_config.sh@134 -- # sleep 0.5 00:06:17.925 05:05:07 -- json_config/json_config.sh@129 -- # (( i++ )) 00:06:17.926 05:05:07 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:17.926 05:05:07 -- json_config/json_config.sh@130 -- # kill -0 66141 00:06:18.203 05:05:07 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:06:18.203 05:05:07 -- json_config/json_config.sh@132 -- # break 00:06:18.203 SPDK target shutdown done 00:06:18.203 05:05:07 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:06:18.203 05:05:07 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:06:18.203 05:05:07 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:06:18.203 INFO: relaunching applications... 
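The teardown that just ran is likewise short once the trace output is folded away: every subsystem is cleared over RPC, the target gets SIGINT, and the script polls for up to roughly 15 seconds (30 iterations of 0.5 s, as in the loop above) until the pid disappears. A rough sketch, with tgt_pid again standing in for the pid the script keeps in app_pid:

    # drop all configured subsystems (iscsi, nvmf, bdev, accel, ...) over RPC
    /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config

    # ask the target to exit and wait for the process to actually go away
    kill -SIGINT "$tgt_pid"
    for _ in $(seq 1 30); do
        kill -0 "$tgt_pid" 2>/dev/null || break   # kill -0 fails once the pid is gone
        sleep 0.5
    done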
00:06:18.203 05:05:07 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:18.203 05:05:07 -- json_config/json_config.sh@98 -- # local app=target 00:06:18.203 05:05:07 -- json_config/json_config.sh@99 -- # shift 00:06:18.203 05:05:07 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:18.203 05:05:07 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:18.203 05:05:07 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:18.203 05:05:07 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:18.203 05:05:07 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:18.203 05:05:07 -- json_config/json_config.sh@111 -- # app_pid[$app]=66332 00:06:18.203 05:05:07 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:18.203 05:05:07 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:18.203 Waiting for target to run... 00:06:18.203 05:05:07 -- json_config/json_config.sh@114 -- # waitforlisten 66332 /var/tmp/spdk_tgt.sock 00:06:18.203 05:05:07 -- common/autotest_common.sh@829 -- # '[' -z 66332 ']' 00:06:18.203 05:05:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:18.203 05:05:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:18.203 05:05:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:18.203 05:05:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.203 05:05:07 -- common/autotest_common.sh@10 -- # set +x 00:06:18.203 [2024-12-08 05:05:07.774302] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:18.203 [2024-12-08 05:05:07.774401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66332 ] 00:06:18.511 [2024-12-08 05:05:08.076190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.511 [2024-12-08 05:05:08.096487] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:18.511 [2024-12-08 05:05:08.096627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.791 [2024-12-08 05:05:08.385835] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:18.791 [2024-12-08 05:05:08.417932] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:19.050 00:06:19.050 INFO: Checking if target configuration is the same... 00:06:19.050 05:05:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.050 05:05:08 -- common/autotest_common.sh@862 -- # return 0 00:06:19.050 05:05:08 -- json_config/json_config.sh@115 -- # echo '' 00:06:19.050 05:05:08 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:06:19.050 05:05:08 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
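The "same configuration" check announced here works by dumping the live config over RPC and diffing it, key-sorted, against the JSON file the target was relaunched from. A simplified sketch of what json_diff.sh does below; the /tmp filenames are chosen only for illustration (the real helper uses mktemp):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py

    # live configuration, normalized by sorting, versus the file used at startup
    $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.sorted
    $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/disk.sorted

    if diff -u /tmp/disk.sorted /tmp/live.sorted; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi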
00:06:19.050 05:05:08 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:19.050 05:05:08 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:06:19.050 05:05:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:19.050 + '[' 2 -ne 2 ']' 00:06:19.050 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:19.050 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:19.050 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:19.050 +++ basename /dev/fd/62 00:06:19.050 ++ mktemp /tmp/62.XXX 00:06:19.050 + tmp_file_1=/tmp/62.JAj 00:06:19.050 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:19.050 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:19.050 + tmp_file_2=/tmp/spdk_tgt_config.json.TYg 00:06:19.050 + ret=0 00:06:19.050 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:19.309 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:19.569 + diff -u /tmp/62.JAj /tmp/spdk_tgt_config.json.TYg 00:06:19.569 INFO: JSON config files are the same 00:06:19.569 + echo 'INFO: JSON config files are the same' 00:06:19.569 + rm /tmp/62.JAj /tmp/spdk_tgt_config.json.TYg 00:06:19.569 + exit 0 00:06:19.569 INFO: changing configuration and checking if this can be detected... 00:06:19.569 05:05:09 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:06:19.569 05:05:09 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:19.569 05:05:09 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:19.569 05:05:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:19.828 05:05:09 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:19.828 05:05:09 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:06:19.828 05:05:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:19.828 + '[' 2 -ne 2 ']' 00:06:19.828 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:19.828 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:19.828 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:19.828 +++ basename /dev/fd/62 00:06:19.828 ++ mktemp /tmp/62.XXX 00:06:19.828 + tmp_file_1=/tmp/62.gv5 00:06:19.828 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:19.828 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:19.828 + tmp_file_2=/tmp/spdk_tgt_config.json.YPJ 00:06:19.828 + ret=0 00:06:19.828 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:20.088 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:20.348 + diff -u /tmp/62.gv5 /tmp/spdk_tgt_config.json.YPJ 00:06:20.348 + ret=1 00:06:20.348 + echo '=== Start of file: /tmp/62.gv5 ===' 00:06:20.348 + cat /tmp/62.gv5 00:06:20.348 + echo '=== End of file: /tmp/62.gv5 ===' 00:06:20.348 + echo '' 00:06:20.348 + echo '=== Start of file: /tmp/spdk_tgt_config.json.YPJ ===' 00:06:20.348 + cat /tmp/spdk_tgt_config.json.YPJ 00:06:20.348 + echo '=== End of file: /tmp/spdk_tgt_config.json.YPJ ===' 00:06:20.348 + echo '' 00:06:20.348 + rm /tmp/62.gv5 /tmp/spdk_tgt_config.json.YPJ 00:06:20.348 + exit 1 00:06:20.348 INFO: configuration change detected. 00:06:20.348 05:05:09 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:06:20.348 05:05:09 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:06:20.348 05:05:09 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:06:20.348 05:05:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:20.348 05:05:09 -- common/autotest_common.sh@10 -- # set +x 00:06:20.348 05:05:09 -- json_config/json_config.sh@360 -- # local ret=0 00:06:20.348 05:05:09 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:06:20.348 05:05:09 -- json_config/json_config.sh@370 -- # [[ -n 66332 ]] 00:06:20.348 05:05:09 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:06:20.348 05:05:09 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:06:20.348 05:05:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:20.348 05:05:09 -- common/autotest_common.sh@10 -- # set +x 00:06:20.348 05:05:09 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:06:20.348 05:05:09 -- json_config/json_config.sh@246 -- # uname -s 00:06:20.348 05:05:09 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:06:20.348 05:05:09 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:06:20.348 05:05:09 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:06:20.348 05:05:09 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:06:20.348 05:05:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:20.348 05:05:09 -- common/autotest_common.sh@10 -- # set +x 00:06:20.348 05:05:09 -- json_config/json_config.sh@376 -- # killprocess 66332 00:06:20.348 05:05:09 -- common/autotest_common.sh@936 -- # '[' -z 66332 ']' 00:06:20.348 05:05:09 -- common/autotest_common.sh@940 -- # kill -0 66332 00:06:20.348 05:05:09 -- common/autotest_common.sh@941 -- # uname 00:06:20.348 05:05:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:20.348 05:05:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66332 00:06:20.348 killing process with pid 66332 00:06:20.348 05:05:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:20.348 05:05:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:20.348 05:05:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66332' 00:06:20.348 
05:05:09 -- common/autotest_common.sh@955 -- # kill 66332 00:06:20.348 05:05:09 -- common/autotest_common.sh@960 -- # wait 66332 00:06:20.608 05:05:10 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:20.608 05:05:10 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:06:20.608 05:05:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:20.608 05:05:10 -- common/autotest_common.sh@10 -- # set +x 00:06:20.608 05:05:10 -- json_config/json_config.sh@381 -- # return 0 00:06:20.608 INFO: Success 00:06:20.608 05:05:10 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:06:20.608 ************************************ 00:06:20.608 END TEST json_config 00:06:20.608 ************************************ 00:06:20.608 00:06:20.608 real 0m8.304s 00:06:20.608 user 0m12.094s 00:06:20.608 sys 0m1.463s 00:06:20.608 05:05:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.608 05:05:10 -- common/autotest_common.sh@10 -- # set +x 00:06:20.608 05:05:10 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:20.608 05:05:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:20.608 05:05:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.608 05:05:10 -- common/autotest_common.sh@10 -- # set +x 00:06:20.608 ************************************ 00:06:20.608 START TEST json_config_extra_key 00:06:20.608 ************************************ 00:06:20.608 05:05:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:20.608 05:05:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:20.608 05:05:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:20.608 05:05:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:20.608 05:05:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:20.608 05:05:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:20.608 05:05:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:20.608 05:05:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:20.609 05:05:10 -- scripts/common.sh@335 -- # IFS=.-: 00:06:20.609 05:05:10 -- scripts/common.sh@335 -- # read -ra ver1 00:06:20.609 05:05:10 -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.609 05:05:10 -- scripts/common.sh@336 -- # read -ra ver2 00:06:20.609 05:05:10 -- scripts/common.sh@337 -- # local 'op=<' 00:06:20.609 05:05:10 -- scripts/common.sh@339 -- # ver1_l=2 00:06:20.609 05:05:10 -- scripts/common.sh@340 -- # ver2_l=1 00:06:20.609 05:05:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:20.609 05:05:10 -- scripts/common.sh@343 -- # case "$op" in 00:06:20.609 05:05:10 -- scripts/common.sh@344 -- # : 1 00:06:20.609 05:05:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:20.609 05:05:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.609 05:05:10 -- scripts/common.sh@364 -- # decimal 1 00:06:20.609 05:05:10 -- scripts/common.sh@352 -- # local d=1 00:06:20.609 05:05:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.609 05:05:10 -- scripts/common.sh@354 -- # echo 1 00:06:20.609 05:05:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:20.869 05:05:10 -- scripts/common.sh@365 -- # decimal 2 00:06:20.869 05:05:10 -- scripts/common.sh@352 -- # local d=2 00:06:20.869 05:05:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.869 05:05:10 -- scripts/common.sh@354 -- # echo 2 00:06:20.869 05:05:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:20.869 05:05:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:20.869 05:05:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:20.869 05:05:10 -- scripts/common.sh@367 -- # return 0 00:06:20.869 05:05:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.869 05:05:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:20.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.869 --rc genhtml_branch_coverage=1 00:06:20.869 --rc genhtml_function_coverage=1 00:06:20.869 --rc genhtml_legend=1 00:06:20.869 --rc geninfo_all_blocks=1 00:06:20.869 --rc geninfo_unexecuted_blocks=1 00:06:20.869 00:06:20.869 ' 00:06:20.869 05:05:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:20.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.869 --rc genhtml_branch_coverage=1 00:06:20.869 --rc genhtml_function_coverage=1 00:06:20.869 --rc genhtml_legend=1 00:06:20.869 --rc geninfo_all_blocks=1 00:06:20.869 --rc geninfo_unexecuted_blocks=1 00:06:20.869 00:06:20.869 ' 00:06:20.869 05:05:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:20.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.869 --rc genhtml_branch_coverage=1 00:06:20.869 --rc genhtml_function_coverage=1 00:06:20.869 --rc genhtml_legend=1 00:06:20.869 --rc geninfo_all_blocks=1 00:06:20.869 --rc geninfo_unexecuted_blocks=1 00:06:20.869 00:06:20.869 ' 00:06:20.869 05:05:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:20.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.869 --rc genhtml_branch_coverage=1 00:06:20.869 --rc genhtml_function_coverage=1 00:06:20.869 --rc genhtml_legend=1 00:06:20.869 --rc geninfo_all_blocks=1 00:06:20.869 --rc geninfo_unexecuted_blocks=1 00:06:20.869 00:06:20.869 ' 00:06:20.869 05:05:10 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:20.869 05:05:10 -- nvmf/common.sh@7 -- # uname -s 00:06:20.869 05:05:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.869 05:05:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.869 05:05:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.869 05:05:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.869 05:05:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.869 05:05:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.869 05:05:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.869 05:05:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.869 05:05:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.869 05:05:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.869 05:05:10 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:06:20.869 05:05:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:06:20.869 05:05:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.869 05:05:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.869 05:05:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:20.869 05:05:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:20.869 05:05:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.869 05:05:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.869 05:05:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.869 05:05:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.869 05:05:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.869 05:05:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.869 05:05:10 -- paths/export.sh@5 -- # export PATH 00:06:20.870 05:05:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.870 05:05:10 -- nvmf/common.sh@46 -- # : 0 00:06:20.870 05:05:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:20.870 05:05:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:20.870 05:05:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:20.870 05:05:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.870 05:05:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.870 05:05:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:20.870 05:05:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:20.870 05:05:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:20.870 05:05:10 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:06:20.870 05:05:10 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:06:20.870 05:05:10 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:20.870 05:05:10 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:06:20.870 05:05:10 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:20.870 05:05:10 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:06:20.870 05:05:10 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:20.870 05:05:10 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:06:20.870 05:05:10 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:20.870 05:05:10 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:06:20.870 INFO: launching applications... 00:06:20.870 05:05:10 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:20.870 05:05:10 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:06:20.870 05:05:10 -- json_config/json_config_extra_key.sh@25 -- # shift 00:06:20.870 05:05:10 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:06:20.870 05:05:10 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:06:20.870 05:05:10 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=66479 00:06:20.870 05:05:10 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:20.870 Waiting for target to run... 00:06:20.870 05:05:10 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:06:20.870 05:05:10 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 66479 /var/tmp/spdk_tgt.sock 00:06:20.870 05:05:10 -- common/autotest_common.sh@829 -- # '[' -z 66479 ']' 00:06:20.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:20.870 05:05:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:20.870 05:05:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.870 05:05:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:20.870 05:05:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.870 05:05:10 -- common/autotest_common.sh@10 -- # set +x 00:06:20.870 [2024-12-08 05:05:10.488081] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:20.870 [2024-12-08 05:05:10.488174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66479 ] 00:06:21.129 [2024-12-08 05:05:10.815344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.129 [2024-12-08 05:05:10.839493] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:21.129 [2024-12-08 05:05:10.839645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.066 00:06:22.066 INFO: shutting down applications... 
00:06:22.066 05:05:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.066 05:05:11 -- common/autotest_common.sh@862 -- # return 0 00:06:22.067 05:05:11 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:06:22.067 05:05:11 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:06:22.067 05:05:11 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:06:22.067 05:05:11 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:06:22.067 05:05:11 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:06:22.067 05:05:11 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 66479 ]] 00:06:22.067 05:05:11 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 66479 00:06:22.067 05:05:11 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:06:22.067 05:05:11 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:22.067 05:05:11 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66479 00:06:22.067 05:05:11 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:22.325 05:05:12 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:22.325 05:05:12 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:22.325 05:05:12 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66479 00:06:22.325 SPDK target shutdown done 00:06:22.325 Success 00:06:22.325 05:05:12 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:06:22.325 05:05:12 -- json_config/json_config_extra_key.sh@52 -- # break 00:06:22.325 05:05:12 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:06:22.325 05:05:12 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:06:22.325 05:05:12 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:06:22.325 00:06:22.325 real 0m1.781s 00:06:22.325 user 0m1.611s 00:06:22.325 sys 0m0.353s 00:06:22.325 ************************************ 00:06:22.325 END TEST json_config_extra_key 00:06:22.325 ************************************ 00:06:22.325 05:05:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.325 05:05:12 -- common/autotest_common.sh@10 -- # set +x 00:06:22.325 05:05:12 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:22.325 05:05:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:22.325 05:05:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.325 05:05:12 -- common/autotest_common.sh@10 -- # set +x 00:06:22.325 ************************************ 00:06:22.325 START TEST alias_rpc 00:06:22.325 ************************************ 00:06:22.325 05:05:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:22.583 * Looking for test storage... 
00:06:22.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:22.583 05:05:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:22.583 05:05:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:22.583 05:05:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:22.583 05:05:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:22.583 05:05:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:22.583 05:05:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:22.583 05:05:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:22.583 05:05:12 -- scripts/common.sh@335 -- # IFS=.-: 00:06:22.583 05:05:12 -- scripts/common.sh@335 -- # read -ra ver1 00:06:22.583 05:05:12 -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.583 05:05:12 -- scripts/common.sh@336 -- # read -ra ver2 00:06:22.583 05:05:12 -- scripts/common.sh@337 -- # local 'op=<' 00:06:22.583 05:05:12 -- scripts/common.sh@339 -- # ver1_l=2 00:06:22.583 05:05:12 -- scripts/common.sh@340 -- # ver2_l=1 00:06:22.583 05:05:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:22.583 05:05:12 -- scripts/common.sh@343 -- # case "$op" in 00:06:22.583 05:05:12 -- scripts/common.sh@344 -- # : 1 00:06:22.583 05:05:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:22.583 05:05:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.583 05:05:12 -- scripts/common.sh@364 -- # decimal 1 00:06:22.583 05:05:12 -- scripts/common.sh@352 -- # local d=1 00:06:22.583 05:05:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.583 05:05:12 -- scripts/common.sh@354 -- # echo 1 00:06:22.583 05:05:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:22.583 05:05:12 -- scripts/common.sh@365 -- # decimal 2 00:06:22.583 05:05:12 -- scripts/common.sh@352 -- # local d=2 00:06:22.583 05:05:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.583 05:05:12 -- scripts/common.sh@354 -- # echo 2 00:06:22.583 05:05:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:22.583 05:05:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:22.583 05:05:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:22.583 05:05:12 -- scripts/common.sh@367 -- # return 0 00:06:22.583 05:05:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.583 05:05:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:22.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.583 --rc genhtml_branch_coverage=1 00:06:22.583 --rc genhtml_function_coverage=1 00:06:22.583 --rc genhtml_legend=1 00:06:22.583 --rc geninfo_all_blocks=1 00:06:22.583 --rc geninfo_unexecuted_blocks=1 00:06:22.583 00:06:22.583 ' 00:06:22.583 05:05:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:22.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.583 --rc genhtml_branch_coverage=1 00:06:22.583 --rc genhtml_function_coverage=1 00:06:22.583 --rc genhtml_legend=1 00:06:22.583 --rc geninfo_all_blocks=1 00:06:22.583 --rc geninfo_unexecuted_blocks=1 00:06:22.583 00:06:22.583 ' 00:06:22.583 05:05:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:22.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.583 --rc genhtml_branch_coverage=1 00:06:22.583 --rc genhtml_function_coverage=1 00:06:22.583 --rc genhtml_legend=1 00:06:22.583 --rc geninfo_all_blocks=1 00:06:22.583 --rc geninfo_unexecuted_blocks=1 00:06:22.583 00:06:22.583 ' 
00:06:22.583 05:05:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:22.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.583 --rc genhtml_branch_coverage=1 00:06:22.583 --rc genhtml_function_coverage=1 00:06:22.583 --rc genhtml_legend=1 00:06:22.583 --rc geninfo_all_blocks=1 00:06:22.583 --rc geninfo_unexecuted_blocks=1 00:06:22.583 00:06:22.583 ' 00:06:22.583 05:05:12 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:22.583 05:05:12 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=66551 00:06:22.583 05:05:12 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:22.583 05:05:12 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 66551 00:06:22.583 05:05:12 -- common/autotest_common.sh@829 -- # '[' -z 66551 ']' 00:06:22.583 05:05:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.583 05:05:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.583 05:05:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.583 05:05:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.583 05:05:12 -- common/autotest_common.sh@10 -- # set +x 00:06:22.583 [2024-12-08 05:05:12.324554] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:22.583 [2024-12-08 05:05:12.325107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66551 ] 00:06:22.841 [2024-12-08 05:05:12.465163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.841 [2024-12-08 05:05:12.502106] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:22.841 [2024-12-08 05:05:12.502443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.773 05:05:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.773 05:05:13 -- common/autotest_common.sh@862 -- # return 0 00:06:23.773 05:05:13 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:24.032 05:05:13 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 66551 00:06:24.032 05:05:13 -- common/autotest_common.sh@936 -- # '[' -z 66551 ']' 00:06:24.032 05:05:13 -- common/autotest_common.sh@940 -- # kill -0 66551 00:06:24.032 05:05:13 -- common/autotest_common.sh@941 -- # uname 00:06:24.032 05:05:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:24.032 05:05:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66551 00:06:24.032 killing process with pid 66551 00:06:24.032 05:05:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:24.032 05:05:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:24.032 05:05:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66551' 00:06:24.032 05:05:13 -- common/autotest_common.sh@955 -- # kill 66551 00:06:24.032 05:05:13 -- common/autotest_common.sh@960 -- # wait 66551 00:06:24.290 ************************************ 00:06:24.290 END TEST alias_rpc 00:06:24.290 ************************************ 00:06:24.290 00:06:24.290 real 0m1.768s 00:06:24.290 user 0m2.139s 00:06:24.290 sys 0m0.328s 
00:06:24.290 05:05:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.290 05:05:13 -- common/autotest_common.sh@10 -- # set +x 00:06:24.290 05:05:13 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:06:24.290 05:05:13 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:24.290 05:05:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:24.290 05:05:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.290 05:05:13 -- common/autotest_common.sh@10 -- # set +x 00:06:24.290 ************************************ 00:06:24.290 START TEST spdkcli_tcp 00:06:24.290 ************************************ 00:06:24.290 05:05:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:24.290 * Looking for test storage... 00:06:24.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:24.290 05:05:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:24.290 05:05:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:24.290 05:05:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:24.290 05:05:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:24.290 05:05:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:24.290 05:05:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:24.290 05:05:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:24.290 05:05:14 -- scripts/common.sh@335 -- # IFS=.-: 00:06:24.290 05:05:14 -- scripts/common.sh@335 -- # read -ra ver1 00:06:24.290 05:05:14 -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.290 05:05:14 -- scripts/common.sh@336 -- # read -ra ver2 00:06:24.290 05:05:14 -- scripts/common.sh@337 -- # local 'op=<' 00:06:24.290 05:05:14 -- scripts/common.sh@339 -- # ver1_l=2 00:06:24.290 05:05:14 -- scripts/common.sh@340 -- # ver2_l=1 00:06:24.290 05:05:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:24.290 05:05:14 -- scripts/common.sh@343 -- # case "$op" in 00:06:24.290 05:05:14 -- scripts/common.sh@344 -- # : 1 00:06:24.290 05:05:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:24.290 05:05:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.290 05:05:14 -- scripts/common.sh@364 -- # decimal 1 00:06:24.290 05:05:14 -- scripts/common.sh@352 -- # local d=1 00:06:24.290 05:05:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.290 05:05:14 -- scripts/common.sh@354 -- # echo 1 00:06:24.290 05:05:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:24.290 05:05:14 -- scripts/common.sh@365 -- # decimal 2 00:06:24.549 05:05:14 -- scripts/common.sh@352 -- # local d=2 00:06:24.549 05:05:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.549 05:05:14 -- scripts/common.sh@354 -- # echo 2 00:06:24.549 05:05:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:24.549 05:05:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:24.549 05:05:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:24.550 05:05:14 -- scripts/common.sh@367 -- # return 0 00:06:24.550 05:05:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.550 05:05:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:24.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.550 --rc genhtml_branch_coverage=1 00:06:24.550 --rc genhtml_function_coverage=1 00:06:24.550 --rc genhtml_legend=1 00:06:24.550 --rc geninfo_all_blocks=1 00:06:24.550 --rc geninfo_unexecuted_blocks=1 00:06:24.550 00:06:24.550 ' 00:06:24.550 05:05:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:24.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.550 --rc genhtml_branch_coverage=1 00:06:24.550 --rc genhtml_function_coverage=1 00:06:24.550 --rc genhtml_legend=1 00:06:24.550 --rc geninfo_all_blocks=1 00:06:24.550 --rc geninfo_unexecuted_blocks=1 00:06:24.550 00:06:24.550 ' 00:06:24.550 05:05:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:24.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.550 --rc genhtml_branch_coverage=1 00:06:24.550 --rc genhtml_function_coverage=1 00:06:24.550 --rc genhtml_legend=1 00:06:24.550 --rc geninfo_all_blocks=1 00:06:24.550 --rc geninfo_unexecuted_blocks=1 00:06:24.550 00:06:24.550 ' 00:06:24.550 05:05:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:24.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.550 --rc genhtml_branch_coverage=1 00:06:24.550 --rc genhtml_function_coverage=1 00:06:24.550 --rc genhtml_legend=1 00:06:24.550 --rc geninfo_all_blocks=1 00:06:24.550 --rc geninfo_unexecuted_blocks=1 00:06:24.550 00:06:24.550 ' 00:06:24.550 05:05:14 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:24.550 05:05:14 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:24.550 05:05:14 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:24.550 05:05:14 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:24.550 05:05:14 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:24.550 05:05:14 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:24.550 05:05:14 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:24.550 05:05:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:24.550 05:05:14 -- common/autotest_common.sh@10 -- # set +x 00:06:24.550 05:05:14 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=66634 00:06:24.550 05:05:14 -- spdkcli/tcp.sh@27 -- # waitforlisten 66634 00:06:24.550 05:05:14 -- common/autotest_common.sh@829 -- # '[' -z 66634 ']' 
00:06:24.550 05:05:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.550 05:05:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.550 05:05:14 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:24.550 05:05:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.550 05:05:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.550 05:05:14 -- common/autotest_common.sh@10 -- # set +x 00:06:24.550 [2024-12-08 05:05:14.136560] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:24.550 [2024-12-08 05:05:14.137081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66634 ] 00:06:24.550 [2024-12-08 05:05:14.272796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.550 [2024-12-08 05:05:14.308741] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:24.550 [2024-12-08 05:05:14.309208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.550 [2024-12-08 05:05:14.309219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.481 05:05:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.481 05:05:15 -- common/autotest_common.sh@862 -- # return 0 00:06:25.481 05:05:15 -- spdkcli/tcp.sh@31 -- # socat_pid=66651 00:06:25.481 05:05:15 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:25.481 05:05:15 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:25.738 [ 00:06:25.738 "bdev_malloc_delete", 00:06:25.738 "bdev_malloc_create", 00:06:25.738 "bdev_null_resize", 00:06:25.738 "bdev_null_delete", 00:06:25.738 "bdev_null_create", 00:06:25.738 "bdev_nvme_cuse_unregister", 00:06:25.738 "bdev_nvme_cuse_register", 00:06:25.738 "bdev_opal_new_user", 00:06:25.738 "bdev_opal_set_lock_state", 00:06:25.738 "bdev_opal_delete", 00:06:25.738 "bdev_opal_get_info", 00:06:25.738 "bdev_opal_create", 00:06:25.738 "bdev_nvme_opal_revert", 00:06:25.738 "bdev_nvme_opal_init", 00:06:25.738 "bdev_nvme_send_cmd", 00:06:25.738 "bdev_nvme_get_path_iostat", 00:06:25.738 "bdev_nvme_get_mdns_discovery_info", 00:06:25.738 "bdev_nvme_stop_mdns_discovery", 00:06:25.738 "bdev_nvme_start_mdns_discovery", 00:06:25.738 "bdev_nvme_set_multipath_policy", 00:06:25.738 "bdev_nvme_set_preferred_path", 00:06:25.738 "bdev_nvme_get_io_paths", 00:06:25.738 "bdev_nvme_remove_error_injection", 00:06:25.738 "bdev_nvme_add_error_injection", 00:06:25.738 "bdev_nvme_get_discovery_info", 00:06:25.738 "bdev_nvme_stop_discovery", 00:06:25.738 "bdev_nvme_start_discovery", 00:06:25.738 "bdev_nvme_get_controller_health_info", 00:06:25.738 "bdev_nvme_disable_controller", 00:06:25.738 "bdev_nvme_enable_controller", 00:06:25.738 "bdev_nvme_reset_controller", 00:06:25.738 "bdev_nvme_get_transport_statistics", 00:06:25.738 "bdev_nvme_apply_firmware", 00:06:25.738 "bdev_nvme_detach_controller", 00:06:25.738 "bdev_nvme_get_controllers", 00:06:25.739 "bdev_nvme_attach_controller", 00:06:25.739 "bdev_nvme_set_hotplug", 00:06:25.739 
"bdev_nvme_set_options", 00:06:25.739 "bdev_passthru_delete", 00:06:25.739 "bdev_passthru_create", 00:06:25.739 "bdev_lvol_grow_lvstore", 00:06:25.739 "bdev_lvol_get_lvols", 00:06:25.739 "bdev_lvol_get_lvstores", 00:06:25.739 "bdev_lvol_delete", 00:06:25.739 "bdev_lvol_set_read_only", 00:06:25.739 "bdev_lvol_resize", 00:06:25.739 "bdev_lvol_decouple_parent", 00:06:25.739 "bdev_lvol_inflate", 00:06:25.739 "bdev_lvol_rename", 00:06:25.739 "bdev_lvol_clone_bdev", 00:06:25.739 "bdev_lvol_clone", 00:06:25.739 "bdev_lvol_snapshot", 00:06:25.739 "bdev_lvol_create", 00:06:25.739 "bdev_lvol_delete_lvstore", 00:06:25.739 "bdev_lvol_rename_lvstore", 00:06:25.739 "bdev_lvol_create_lvstore", 00:06:25.739 "bdev_raid_set_options", 00:06:25.739 "bdev_raid_remove_base_bdev", 00:06:25.739 "bdev_raid_add_base_bdev", 00:06:25.739 "bdev_raid_delete", 00:06:25.739 "bdev_raid_create", 00:06:25.739 "bdev_raid_get_bdevs", 00:06:25.739 "bdev_error_inject_error", 00:06:25.739 "bdev_error_delete", 00:06:25.739 "bdev_error_create", 00:06:25.739 "bdev_split_delete", 00:06:25.739 "bdev_split_create", 00:06:25.739 "bdev_delay_delete", 00:06:25.739 "bdev_delay_create", 00:06:25.739 "bdev_delay_update_latency", 00:06:25.739 "bdev_zone_block_delete", 00:06:25.739 "bdev_zone_block_create", 00:06:25.739 "blobfs_create", 00:06:25.739 "blobfs_detect", 00:06:25.739 "blobfs_set_cache_size", 00:06:25.739 "bdev_aio_delete", 00:06:25.739 "bdev_aio_rescan", 00:06:25.739 "bdev_aio_create", 00:06:25.739 "bdev_ftl_set_property", 00:06:25.739 "bdev_ftl_get_properties", 00:06:25.739 "bdev_ftl_get_stats", 00:06:25.739 "bdev_ftl_unmap", 00:06:25.739 "bdev_ftl_unload", 00:06:25.739 "bdev_ftl_delete", 00:06:25.739 "bdev_ftl_load", 00:06:25.739 "bdev_ftl_create", 00:06:25.739 "bdev_virtio_attach_controller", 00:06:25.739 "bdev_virtio_scsi_get_devices", 00:06:25.739 "bdev_virtio_detach_controller", 00:06:25.739 "bdev_virtio_blk_set_hotplug", 00:06:25.739 "bdev_iscsi_delete", 00:06:25.739 "bdev_iscsi_create", 00:06:25.739 "bdev_iscsi_set_options", 00:06:25.739 "bdev_uring_delete", 00:06:25.739 "bdev_uring_create", 00:06:25.739 "accel_error_inject_error", 00:06:25.739 "ioat_scan_accel_module", 00:06:25.739 "dsa_scan_accel_module", 00:06:25.739 "iaa_scan_accel_module", 00:06:25.739 "iscsi_set_options", 00:06:25.739 "iscsi_get_auth_groups", 00:06:25.739 "iscsi_auth_group_remove_secret", 00:06:25.739 "iscsi_auth_group_add_secret", 00:06:25.739 "iscsi_delete_auth_group", 00:06:25.739 "iscsi_create_auth_group", 00:06:25.739 "iscsi_set_discovery_auth", 00:06:25.739 "iscsi_get_options", 00:06:25.739 "iscsi_target_node_request_logout", 00:06:25.739 "iscsi_target_node_set_redirect", 00:06:25.739 "iscsi_target_node_set_auth", 00:06:25.739 "iscsi_target_node_add_lun", 00:06:25.739 "iscsi_get_connections", 00:06:25.739 "iscsi_portal_group_set_auth", 00:06:25.739 "iscsi_start_portal_group", 00:06:25.739 "iscsi_delete_portal_group", 00:06:25.739 "iscsi_create_portal_group", 00:06:25.739 "iscsi_get_portal_groups", 00:06:25.739 "iscsi_delete_target_node", 00:06:25.739 "iscsi_target_node_remove_pg_ig_maps", 00:06:25.739 "iscsi_target_node_add_pg_ig_maps", 00:06:25.739 "iscsi_create_target_node", 00:06:25.739 "iscsi_get_target_nodes", 00:06:25.739 "iscsi_delete_initiator_group", 00:06:25.739 "iscsi_initiator_group_remove_initiators", 00:06:25.739 "iscsi_initiator_group_add_initiators", 00:06:25.739 "iscsi_create_initiator_group", 00:06:25.739 "iscsi_get_initiator_groups", 00:06:25.739 "nvmf_set_crdt", 00:06:25.739 "nvmf_set_config", 00:06:25.739 
"nvmf_set_max_subsystems", 00:06:25.739 "nvmf_subsystem_get_listeners", 00:06:25.739 "nvmf_subsystem_get_qpairs", 00:06:25.739 "nvmf_subsystem_get_controllers", 00:06:25.739 "nvmf_get_stats", 00:06:25.739 "nvmf_get_transports", 00:06:25.739 "nvmf_create_transport", 00:06:25.739 "nvmf_get_targets", 00:06:25.739 "nvmf_delete_target", 00:06:25.739 "nvmf_create_target", 00:06:25.739 "nvmf_subsystem_allow_any_host", 00:06:25.739 "nvmf_subsystem_remove_host", 00:06:25.739 "nvmf_subsystem_add_host", 00:06:25.739 "nvmf_subsystem_remove_ns", 00:06:25.739 "nvmf_subsystem_add_ns", 00:06:25.739 "nvmf_subsystem_listener_set_ana_state", 00:06:25.739 "nvmf_discovery_get_referrals", 00:06:25.739 "nvmf_discovery_remove_referral", 00:06:25.739 "nvmf_discovery_add_referral", 00:06:25.739 "nvmf_subsystem_remove_listener", 00:06:25.739 "nvmf_subsystem_add_listener", 00:06:25.739 "nvmf_delete_subsystem", 00:06:25.739 "nvmf_create_subsystem", 00:06:25.739 "nvmf_get_subsystems", 00:06:25.739 "env_dpdk_get_mem_stats", 00:06:25.739 "nbd_get_disks", 00:06:25.739 "nbd_stop_disk", 00:06:25.739 "nbd_start_disk", 00:06:25.739 "ublk_recover_disk", 00:06:25.739 "ublk_get_disks", 00:06:25.739 "ublk_stop_disk", 00:06:25.739 "ublk_start_disk", 00:06:25.739 "ublk_destroy_target", 00:06:25.739 "ublk_create_target", 00:06:25.739 "virtio_blk_create_transport", 00:06:25.739 "virtio_blk_get_transports", 00:06:25.739 "vhost_controller_set_coalescing", 00:06:25.739 "vhost_get_controllers", 00:06:25.739 "vhost_delete_controller", 00:06:25.739 "vhost_create_blk_controller", 00:06:25.739 "vhost_scsi_controller_remove_target", 00:06:25.739 "vhost_scsi_controller_add_target", 00:06:25.739 "vhost_start_scsi_controller", 00:06:25.739 "vhost_create_scsi_controller", 00:06:25.739 "thread_set_cpumask", 00:06:25.739 "framework_get_scheduler", 00:06:25.739 "framework_set_scheduler", 00:06:25.739 "framework_get_reactors", 00:06:25.739 "thread_get_io_channels", 00:06:25.739 "thread_get_pollers", 00:06:25.739 "thread_get_stats", 00:06:25.739 "framework_monitor_context_switch", 00:06:25.739 "spdk_kill_instance", 00:06:25.739 "log_enable_timestamps", 00:06:25.739 "log_get_flags", 00:06:25.739 "log_clear_flag", 00:06:25.739 "log_set_flag", 00:06:25.739 "log_get_level", 00:06:25.739 "log_set_level", 00:06:25.739 "log_get_print_level", 00:06:25.739 "log_set_print_level", 00:06:25.739 "framework_enable_cpumask_locks", 00:06:25.739 "framework_disable_cpumask_locks", 00:06:25.739 "framework_wait_init", 00:06:25.739 "framework_start_init", 00:06:25.739 "scsi_get_devices", 00:06:25.739 "bdev_get_histogram", 00:06:25.739 "bdev_enable_histogram", 00:06:25.739 "bdev_set_qos_limit", 00:06:25.739 "bdev_set_qd_sampling_period", 00:06:25.739 "bdev_get_bdevs", 00:06:25.739 "bdev_reset_iostat", 00:06:25.739 "bdev_get_iostat", 00:06:25.739 "bdev_examine", 00:06:25.739 "bdev_wait_for_examine", 00:06:25.739 "bdev_set_options", 00:06:25.739 "notify_get_notifications", 00:06:25.739 "notify_get_types", 00:06:25.739 "accel_get_stats", 00:06:25.739 "accel_set_options", 00:06:25.739 "accel_set_driver", 00:06:25.739 "accel_crypto_key_destroy", 00:06:25.739 "accel_crypto_keys_get", 00:06:25.739 "accel_crypto_key_create", 00:06:25.739 "accel_assign_opc", 00:06:25.739 "accel_get_module_info", 00:06:25.739 "accel_get_opc_assignments", 00:06:25.739 "vmd_rescan", 00:06:25.739 "vmd_remove_device", 00:06:25.739 "vmd_enable", 00:06:25.739 "sock_set_default_impl", 00:06:25.739 "sock_impl_set_options", 00:06:25.739 "sock_impl_get_options", 00:06:25.739 "iobuf_get_stats", 00:06:25.739 
"iobuf_set_options", 00:06:25.739 "framework_get_pci_devices", 00:06:25.739 "framework_get_config", 00:06:25.739 "framework_get_subsystems", 00:06:25.739 "trace_get_info", 00:06:25.739 "trace_get_tpoint_group_mask", 00:06:25.739 "trace_disable_tpoint_group", 00:06:25.739 "trace_enable_tpoint_group", 00:06:25.739 "trace_clear_tpoint_mask", 00:06:25.739 "trace_set_tpoint_mask", 00:06:25.739 "spdk_get_version", 00:06:25.739 "rpc_get_methods" 00:06:25.739 ] 00:06:25.739 05:05:15 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:25.739 05:05:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:25.739 05:05:15 -- common/autotest_common.sh@10 -- # set +x 00:06:25.739 05:05:15 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:25.739 05:05:15 -- spdkcli/tcp.sh@38 -- # killprocess 66634 00:06:25.739 05:05:15 -- common/autotest_common.sh@936 -- # '[' -z 66634 ']' 00:06:25.739 05:05:15 -- common/autotest_common.sh@940 -- # kill -0 66634 00:06:25.739 05:05:15 -- common/autotest_common.sh@941 -- # uname 00:06:25.739 05:05:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:25.739 05:05:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66634 00:06:25.739 05:05:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:25.739 killing process with pid 66634 00:06:25.739 05:05:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:25.739 05:05:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66634' 00:06:25.739 05:05:15 -- common/autotest_common.sh@955 -- # kill 66634 00:06:25.739 05:05:15 -- common/autotest_common.sh@960 -- # wait 66634 00:06:25.997 ************************************ 00:06:25.997 END TEST spdkcli_tcp 00:06:25.997 ************************************ 00:06:25.997 00:06:25.997 real 0m1.808s 00:06:25.997 user 0m3.494s 00:06:25.997 sys 0m0.396s 00:06:25.997 05:05:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.997 05:05:15 -- common/autotest_common.sh@10 -- # set +x 00:06:25.997 05:05:15 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:25.997 05:05:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:25.997 05:05:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.997 05:05:15 -- common/autotest_common.sh@10 -- # set +x 00:06:25.997 ************************************ 00:06:25.997 START TEST dpdk_mem_utility 00:06:25.997 ************************************ 00:06:25.997 05:05:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:26.255 * Looking for test storage... 
00:06:26.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:26.255 05:05:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:26.255 05:05:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:26.255 05:05:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:26.255 05:05:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:26.255 05:05:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:26.255 05:05:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:26.255 05:05:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:26.255 05:05:15 -- scripts/common.sh@335 -- # IFS=.-: 00:06:26.255 05:05:15 -- scripts/common.sh@335 -- # read -ra ver1 00:06:26.255 05:05:15 -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.255 05:05:15 -- scripts/common.sh@336 -- # read -ra ver2 00:06:26.255 05:05:15 -- scripts/common.sh@337 -- # local 'op=<' 00:06:26.255 05:05:15 -- scripts/common.sh@339 -- # ver1_l=2 00:06:26.255 05:05:15 -- scripts/common.sh@340 -- # ver2_l=1 00:06:26.255 05:05:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:26.255 05:05:15 -- scripts/common.sh@343 -- # case "$op" in 00:06:26.255 05:05:15 -- scripts/common.sh@344 -- # : 1 00:06:26.255 05:05:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:26.255 05:05:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.255 05:05:15 -- scripts/common.sh@364 -- # decimal 1 00:06:26.255 05:05:15 -- scripts/common.sh@352 -- # local d=1 00:06:26.255 05:05:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.255 05:05:15 -- scripts/common.sh@354 -- # echo 1 00:06:26.255 05:05:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:26.255 05:05:15 -- scripts/common.sh@365 -- # decimal 2 00:06:26.255 05:05:15 -- scripts/common.sh@352 -- # local d=2 00:06:26.255 05:05:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.255 05:05:15 -- scripts/common.sh@354 -- # echo 2 00:06:26.255 05:05:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:26.255 05:05:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:26.255 05:05:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:26.255 05:05:15 -- scripts/common.sh@367 -- # return 0 00:06:26.255 05:05:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.255 05:05:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:26.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.255 --rc genhtml_branch_coverage=1 00:06:26.255 --rc genhtml_function_coverage=1 00:06:26.255 --rc genhtml_legend=1 00:06:26.255 --rc geninfo_all_blocks=1 00:06:26.255 --rc geninfo_unexecuted_blocks=1 00:06:26.255 00:06:26.255 ' 00:06:26.255 05:05:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:26.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.255 --rc genhtml_branch_coverage=1 00:06:26.255 --rc genhtml_function_coverage=1 00:06:26.255 --rc genhtml_legend=1 00:06:26.255 --rc geninfo_all_blocks=1 00:06:26.255 --rc geninfo_unexecuted_blocks=1 00:06:26.255 00:06:26.255 ' 00:06:26.255 05:05:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:26.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.255 --rc genhtml_branch_coverage=1 00:06:26.255 --rc genhtml_function_coverage=1 00:06:26.255 --rc genhtml_legend=1 00:06:26.255 --rc geninfo_all_blocks=1 00:06:26.255 --rc geninfo_unexecuted_blocks=1 00:06:26.255 00:06:26.255 ' 
00:06:26.255 05:05:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:26.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.255 --rc genhtml_branch_coverage=1 00:06:26.255 --rc genhtml_function_coverage=1 00:06:26.255 --rc genhtml_legend=1 00:06:26.255 --rc geninfo_all_blocks=1 00:06:26.255 --rc geninfo_unexecuted_blocks=1 00:06:26.255 00:06:26.255 ' 00:06:26.255 05:05:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:26.255 05:05:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=66732 00:06:26.255 05:05:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 66732 00:06:26.255 05:05:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.255 05:05:15 -- common/autotest_common.sh@829 -- # '[' -z 66732 ']' 00:06:26.255 05:05:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.255 05:05:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.255 05:05:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.255 05:05:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.255 05:05:15 -- common/autotest_common.sh@10 -- # set +x 00:06:26.255 [2024-12-08 05:05:15.997306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.255 [2024-12-08 05:05:15.997601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66732 ] 00:06:26.513 [2024-12-08 05:05:16.136624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.513 [2024-12-08 05:05:16.171510] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:26.513 [2024-12-08 05:05:16.171953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.453 05:05:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.453 05:05:17 -- common/autotest_common.sh@862 -- # return 0 00:06:27.453 05:05:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:27.453 05:05:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:27.453 05:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.453 05:05:17 -- common/autotest_common.sh@10 -- # set +x 00:06:27.453 { 00:06:27.453 "filename": "/tmp/spdk_mem_dump.txt" 00:06:27.453 } 00:06:27.453 05:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.453 05:05:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:27.453 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:27.453 1 heaps totaling size 814.000000 MiB 00:06:27.453 size: 814.000000 MiB heap id: 0 00:06:27.453 end heaps---------- 00:06:27.453 8 mempools totaling size 598.116089 MiB 00:06:27.453 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:27.453 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:27.453 size: 84.521057 MiB name: bdev_io_66732 00:06:27.453 size: 51.011292 MiB name: evtpool_66732 00:06:27.453 size: 50.003479 MiB name: msgpool_66732 
00:06:27.453 size: 21.763794 MiB name: PDU_Pool 00:06:27.453 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:27.453 size: 0.026123 MiB name: Session_Pool 00:06:27.453 end mempools------- 00:06:27.453 6 memzones totaling size 4.142822 MiB 00:06:27.453 size: 1.000366 MiB name: RG_ring_0_66732 00:06:27.453 size: 1.000366 MiB name: RG_ring_1_66732 00:06:27.453 size: 1.000366 MiB name: RG_ring_4_66732 00:06:27.453 size: 1.000366 MiB name: RG_ring_5_66732 00:06:27.453 size: 0.125366 MiB name: RG_ring_2_66732 00:06:27.453 size: 0.015991 MiB name: RG_ring_3_66732 00:06:27.453 end memzones------- 00:06:27.453 05:05:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:27.453 heap id: 0 total size: 814.000000 MiB number of busy elements: 300 number of free elements: 15 00:06:27.453 list of free elements. size: 12.471924 MiB 00:06:27.453 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:27.453 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:27.453 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:27.454 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:27.454 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:27.454 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:27.454 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:27.454 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:27.454 element at address: 0x200000200000 with size: 0.832825 MiB 00:06:27.454 element at address: 0x20001aa00000 with size: 0.569702 MiB 00:06:27.454 element at address: 0x20000b200000 with size: 0.488892 MiB 00:06:27.454 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:27.454 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:27.454 element at address: 0x200027e00000 with size: 0.395752 MiB 00:06:27.454 element at address: 0x200003a00000 with size: 0.347839 MiB 00:06:27.454 list of standard malloc elements. 
size: 199.265503 MiB 00:06:27.454 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:27.454 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:27.454 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:27.454 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:27.454 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:27.454 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:27.454 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:27.454 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:27.454 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:27.454 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d71c0 with size: 0.000183 MiB 
00:06:27.454 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:27.454 element at address: 0x20000087c740 with size: 0.000183 MiB 00:06:27.454 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:27.454 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:27.454 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:27.454 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:27.454 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:27.454 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:27.454 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:27.454 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a59180 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a59240 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a59300 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a59480 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a59540 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a59600 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:27.454 element at 
address: 0x200003a5a140 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:27.454 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:27.454 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:06:27.454 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:06:27.454 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:06:27.454 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:27.454 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:27.454 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:27.454 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:27.454 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:27.454 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:27.454 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:27.455 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:27.455 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:27.455 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa92080 
with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa94540 with size: 0.000183 MiB 
00:06:27.455 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:27.455 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e65500 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:27.455 element at 
address: 0x200027e6d740 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:27.455 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:27.456 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:27.456 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:27.456 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:27.456 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:27.456 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:27.456 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:27.456 element at address: 0x200027e6fc00 
with size: 0.000183 MiB 00:06:27.456 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:27.456 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:27.456 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:27.456 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:27.456 list of memzone associated elements. size: 602.262573 MiB 00:06:27.456 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:27.456 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:27.456 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:27.456 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:27.456 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:27.456 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_66732_0 00:06:27.456 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:27.456 associated memzone info: size: 48.002930 MiB name: MP_evtpool_66732_0 00:06:27.456 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:27.456 associated memzone info: size: 48.002930 MiB name: MP_msgpool_66732_0 00:06:27.456 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:27.456 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:27.456 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:27.456 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:27.456 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:27.456 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_66732 00:06:27.456 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:27.456 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_66732 00:06:27.456 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:27.456 associated memzone info: size: 1.007996 MiB name: MP_evtpool_66732 00:06:27.456 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:27.456 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:27.456 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:27.456 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:27.456 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:27.456 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:27.456 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:27.456 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:27.456 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:27.456 associated memzone info: size: 1.000366 MiB name: RG_ring_0_66732 00:06:27.456 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:27.456 associated memzone info: size: 1.000366 MiB name: RG_ring_1_66732 00:06:27.456 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:27.456 associated memzone info: size: 1.000366 MiB name: RG_ring_4_66732 00:06:27.456 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:27.456 associated memzone info: size: 1.000366 MiB name: RG_ring_5_66732 00:06:27.456 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:27.456 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_66732 00:06:27.456 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:27.456 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:27.456 element at address: 0x20000087cf80 with size: 0.500488 MiB 
00:06:27.456 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:27.456 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:27.456 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:27.456 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:27.456 associated memzone info: size: 0.125366 MiB name: RG_ring_2_66732 00:06:27.456 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:27.456 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:27.456 element at address: 0x200027e65680 with size: 0.023743 MiB 00:06:27.456 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:27.456 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:27.456 associated memzone info: size: 0.015991 MiB name: RG_ring_3_66732 00:06:27.456 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:06:27.456 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:27.456 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:27.456 associated memzone info: size: 0.000183 MiB name: MP_msgpool_66732 00:06:27.456 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:27.456 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_66732 00:06:27.456 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:06:27.456 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:27.456 05:05:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:27.456 05:05:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 66732 00:06:27.456 05:05:17 -- common/autotest_common.sh@936 -- # '[' -z 66732 ']' 00:06:27.456 05:05:17 -- common/autotest_common.sh@940 -- # kill -0 66732 00:06:27.456 05:05:17 -- common/autotest_common.sh@941 -- # uname 00:06:27.456 05:05:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:27.456 05:05:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66732 00:06:27.456 killing process with pid 66732 00:06:27.456 05:05:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:27.456 05:05:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:27.456 05:05:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66732' 00:06:27.456 05:05:17 -- common/autotest_common.sh@955 -- # kill 66732 00:06:27.456 05:05:17 -- common/autotest_common.sh@960 -- # wait 66732 00:06:27.718 00:06:27.718 real 0m1.681s 00:06:27.718 user 0m1.965s 00:06:27.718 sys 0m0.341s 00:06:27.718 05:05:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.718 ************************************ 00:06:27.718 END TEST dpdk_mem_utility 00:06:27.718 ************************************ 00:06:27.718 05:05:17 -- common/autotest_common.sh@10 -- # set +x 00:06:27.718 05:05:17 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:27.718 05:05:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.718 05:05:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.718 05:05:17 -- common/autotest_common.sh@10 -- # set +x 00:06:27.718 ************************************ 00:06:27.718 START TEST event 00:06:27.718 ************************************ 00:06:27.718 05:05:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:27.976 * Looking for test storage... 
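The element and memzone dump above is the payload of the dpdk_mem_utility test: each entry reports one DPDK heap element or memzone with its address and size, and the test tears the target down (killprocess 66732) right after printing it. As a purely illustrative helper, not part of the test suite, the per-element sizes can be totalled from a saved copy of this log (build.log is an assumed local filename), relying only on the exact "with size: <N> MiB" wording shown above:

  # Illustrative only: sum every reported element size (MiB) from a saved log file
  grep -o 'with size: [0-9.]* MiB' build.log | awk '{sum += $3} END {printf "total: %.3f MiB\n", sum}'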
00:06:27.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:27.976 05:05:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:27.976 05:05:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:27.976 05:05:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:27.976 05:05:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:27.976 05:05:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:27.976 05:05:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:27.976 05:05:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:27.976 05:05:17 -- scripts/common.sh@335 -- # IFS=.-: 00:06:27.976 05:05:17 -- scripts/common.sh@335 -- # read -ra ver1 00:06:27.976 05:05:17 -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.976 05:05:17 -- scripts/common.sh@336 -- # read -ra ver2 00:06:27.976 05:05:17 -- scripts/common.sh@337 -- # local 'op=<' 00:06:27.976 05:05:17 -- scripts/common.sh@339 -- # ver1_l=2 00:06:27.976 05:05:17 -- scripts/common.sh@340 -- # ver2_l=1 00:06:27.976 05:05:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:27.976 05:05:17 -- scripts/common.sh@343 -- # case "$op" in 00:06:27.976 05:05:17 -- scripts/common.sh@344 -- # : 1 00:06:27.976 05:05:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:27.976 05:05:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.976 05:05:17 -- scripts/common.sh@364 -- # decimal 1 00:06:27.976 05:05:17 -- scripts/common.sh@352 -- # local d=1 00:06:27.976 05:05:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.976 05:05:17 -- scripts/common.sh@354 -- # echo 1 00:06:27.976 05:05:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:27.976 05:05:17 -- scripts/common.sh@365 -- # decimal 2 00:06:27.976 05:05:17 -- scripts/common.sh@352 -- # local d=2 00:06:27.976 05:05:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.976 05:05:17 -- scripts/common.sh@354 -- # echo 2 00:06:27.976 05:05:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:27.976 05:05:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:27.976 05:05:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:27.976 05:05:17 -- scripts/common.sh@367 -- # return 0 00:06:27.976 05:05:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.976 05:05:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:27.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.976 --rc genhtml_branch_coverage=1 00:06:27.976 --rc genhtml_function_coverage=1 00:06:27.976 --rc genhtml_legend=1 00:06:27.976 --rc geninfo_all_blocks=1 00:06:27.976 --rc geninfo_unexecuted_blocks=1 00:06:27.976 00:06:27.976 ' 00:06:27.976 05:05:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:27.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.976 --rc genhtml_branch_coverage=1 00:06:27.976 --rc genhtml_function_coverage=1 00:06:27.976 --rc genhtml_legend=1 00:06:27.976 --rc geninfo_all_blocks=1 00:06:27.976 --rc geninfo_unexecuted_blocks=1 00:06:27.976 00:06:27.976 ' 00:06:27.976 05:05:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:27.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.976 --rc genhtml_branch_coverage=1 00:06:27.976 --rc genhtml_function_coverage=1 00:06:27.976 --rc genhtml_legend=1 00:06:27.976 --rc geninfo_all_blocks=1 00:06:27.976 --rc geninfo_unexecuted_blocks=1 00:06:27.976 00:06:27.976 ' 00:06:27.976 05:05:17 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:27.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.976 --rc genhtml_branch_coverage=1 00:06:27.976 --rc genhtml_function_coverage=1 00:06:27.976 --rc genhtml_legend=1 00:06:27.976 --rc geninfo_all_blocks=1 00:06:27.976 --rc geninfo_unexecuted_blocks=1 00:06:27.976 00:06:27.976 ' 00:06:27.976 05:05:17 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:27.976 05:05:17 -- bdev/nbd_common.sh@6 -- # set -e 00:06:27.976 05:05:17 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:27.976 05:05:17 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:27.976 05:05:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.976 05:05:17 -- common/autotest_common.sh@10 -- # set +x 00:06:27.976 ************************************ 00:06:27.976 START TEST event_perf 00:06:27.976 ************************************ 00:06:27.976 05:05:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:27.976 Running I/O for 1 seconds...[2024-12-08 05:05:17.700106] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:27.976 [2024-12-08 05:05:17.700339] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66816 ] 00:06:28.235 [2024-12-08 05:05:17.837351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:28.235 [2024-12-08 05:05:17.872856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.235 [2024-12-08 05:05:17.873045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.235 [2024-12-08 05:05:17.873154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.235 [2024-12-08 05:05:17.873155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.170 Running I/O for 1 seconds... 00:06:29.170 lcore 0: 190518 00:06:29.170 lcore 1: 190516 00:06:29.170 lcore 2: 190515 00:06:29.170 lcore 3: 190517 00:06:29.170 done. 00:06:29.170 ************************************ 00:06:29.170 END TEST event_perf 00:06:29.170 ************************************ 00:06:29.170 00:06:29.170 real 0m1.241s 00:06:29.170 user 0m4.069s 00:06:29.170 sys 0m0.050s 00:06:29.170 05:05:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.170 05:05:18 -- common/autotest_common.sh@10 -- # set +x 00:06:29.450 05:05:18 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:29.450 05:05:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:29.450 05:05:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.450 05:05:18 -- common/autotest_common.sh@10 -- # set +x 00:06:29.450 ************************************ 00:06:29.450 START TEST event_reactor 00:06:29.450 ************************************ 00:06:29.450 05:05:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:29.450 [2024-12-08 05:05:18.994181] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
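For reference, the event_perf run that completed above was started with the arguments recorded in its run_test line: -m 0xF is a four-core mask (hence the four reactor/lcore lines) and -t 1 limits the run to one second, matching the "Running I/O for 1 seconds" notice. A minimal way to repeat it outside the harness, assuming a built SPDK tree at the same workspace path:

  # Re-run the standalone event_perf app with the options used above
  cd /home/vagrant/spdk_repo/spdk
  ./test/event/event_perf/event_perf -m 0xF -t 1   # -m: reactor core mask, -t: seconds to run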
00:06:29.450 [2024-12-08 05:05:18.994430] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66849 ] 00:06:29.450 [2024-12-08 05:05:19.128112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.450 [2024-12-08 05:05:19.167640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.832 test_start 00:06:30.832 oneshot 00:06:30.832 tick 100 00:06:30.832 tick 100 00:06:30.832 tick 250 00:06:30.832 tick 100 00:06:30.832 tick 100 00:06:30.832 tick 100 00:06:30.832 tick 250 00:06:30.832 tick 500 00:06:30.832 tick 100 00:06:30.832 tick 100 00:06:30.832 tick 250 00:06:30.832 tick 100 00:06:30.832 tick 100 00:06:30.832 test_end 00:06:30.832 ************************************ 00:06:30.832 END TEST event_reactor 00:06:30.832 ************************************ 00:06:30.832 00:06:30.832 real 0m1.242s 00:06:30.832 user 0m1.098s 00:06:30.832 sys 0m0.039s 00:06:30.832 05:05:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.832 05:05:20 -- common/autotest_common.sh@10 -- # set +x 00:06:30.832 05:05:20 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:30.832 05:05:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:30.832 05:05:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.832 05:05:20 -- common/autotest_common.sh@10 -- # set +x 00:06:30.832 ************************************ 00:06:30.832 START TEST event_reactor_perf 00:06:30.832 ************************************ 00:06:30.832 05:05:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:30.832 [2024-12-08 05:05:20.287203] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:30.832 [2024-12-08 05:05:20.287437] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66879 ] 00:06:30.832 [2024-12-08 05:05:20.423707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.832 [2024-12-08 05:05:20.464492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.770 test_start 00:06:31.770 test_end 00:06:31.770 Performance: 411345 events per second 00:06:31.770 ************************************ 00:06:31.770 END TEST event_reactor_perf 00:06:31.770 ************************************ 00:06:31.770 00:06:31.770 real 0m1.269s 00:06:31.770 user 0m1.117s 00:06:31.770 sys 0m0.045s 00:06:31.770 05:05:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.770 05:05:21 -- common/autotest_common.sh@10 -- # set +x 00:06:32.029 05:05:21 -- event/event.sh@49 -- # uname -s 00:06:32.029 05:05:21 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:32.029 05:05:21 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:32.029 05:05:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:32.029 05:05:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.029 05:05:21 -- common/autotest_common.sh@10 -- # set +x 00:06:32.029 ************************************ 00:06:32.029 START TEST event_scheduler 00:06:32.029 ************************************ 00:06:32.029 05:05:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:32.029 * Looking for test storage... 00:06:32.029 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:32.029 05:05:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:32.029 05:05:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:32.030 05:05:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:32.030 05:05:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:32.030 05:05:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:32.030 05:05:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:32.030 05:05:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:32.030 05:05:21 -- scripts/common.sh@335 -- # IFS=.-: 00:06:32.030 05:05:21 -- scripts/common.sh@335 -- # read -ra ver1 00:06:32.030 05:05:21 -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.030 05:05:21 -- scripts/common.sh@336 -- # read -ra ver2 00:06:32.030 05:05:21 -- scripts/common.sh@337 -- # local 'op=<' 00:06:32.030 05:05:21 -- scripts/common.sh@339 -- # ver1_l=2 00:06:32.030 05:05:21 -- scripts/common.sh@340 -- # ver2_l=1 00:06:32.030 05:05:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:32.030 05:05:21 -- scripts/common.sh@343 -- # case "$op" in 00:06:32.030 05:05:21 -- scripts/common.sh@344 -- # : 1 00:06:32.030 05:05:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:32.030 05:05:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.030 05:05:21 -- scripts/common.sh@364 -- # decimal 1 00:06:32.030 05:05:21 -- scripts/common.sh@352 -- # local d=1 00:06:32.030 05:05:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.030 05:05:21 -- scripts/common.sh@354 -- # echo 1 00:06:32.030 05:05:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:32.030 05:05:21 -- scripts/common.sh@365 -- # decimal 2 00:06:32.030 05:05:21 -- scripts/common.sh@352 -- # local d=2 00:06:32.030 05:05:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.030 05:05:21 -- scripts/common.sh@354 -- # echo 2 00:06:32.030 05:05:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:32.030 05:05:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:32.030 05:05:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:32.030 05:05:21 -- scripts/common.sh@367 -- # return 0 00:06:32.030 05:05:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.030 05:05:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:32.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.030 --rc genhtml_branch_coverage=1 00:06:32.030 --rc genhtml_function_coverage=1 00:06:32.030 --rc genhtml_legend=1 00:06:32.030 --rc geninfo_all_blocks=1 00:06:32.030 --rc geninfo_unexecuted_blocks=1 00:06:32.030 00:06:32.030 ' 00:06:32.030 05:05:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:32.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.030 --rc genhtml_branch_coverage=1 00:06:32.030 --rc genhtml_function_coverage=1 00:06:32.030 --rc genhtml_legend=1 00:06:32.030 --rc geninfo_all_blocks=1 00:06:32.030 --rc geninfo_unexecuted_blocks=1 00:06:32.030 00:06:32.030 ' 00:06:32.030 05:05:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:32.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.030 --rc genhtml_branch_coverage=1 00:06:32.030 --rc genhtml_function_coverage=1 00:06:32.030 --rc genhtml_legend=1 00:06:32.030 --rc geninfo_all_blocks=1 00:06:32.030 --rc geninfo_unexecuted_blocks=1 00:06:32.030 00:06:32.030 ' 00:06:32.030 05:05:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:32.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.030 --rc genhtml_branch_coverage=1 00:06:32.030 --rc genhtml_function_coverage=1 00:06:32.030 --rc genhtml_legend=1 00:06:32.030 --rc geninfo_all_blocks=1 00:06:32.030 --rc geninfo_unexecuted_blocks=1 00:06:32.030 00:06:32.030 ' 00:06:32.030 05:05:21 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:32.030 05:05:21 -- scheduler/scheduler.sh@35 -- # scheduler_pid=66948 00:06:32.030 05:05:21 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:32.030 05:05:21 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.030 05:05:21 -- scheduler/scheduler.sh@37 -- # waitforlisten 66948 00:06:32.030 05:05:21 -- common/autotest_common.sh@829 -- # '[' -z 66948 ']' 00:06:32.030 05:05:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.030 05:05:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.030 05:05:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
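The scheduler test above starts its target with --wait-for-rpc, so the app stops after early initialization and waits to be driven over the RPC socket; waitforlisten 66948 then blocks until that socket answers, which is what the "Waiting for process to start up..." line reflects. A rough sketch of the same start-and-wait pattern, illustrative rather than the autotest helper itself, assuming this workspace's paths and using the standard rpc_get_methods call as a liveness probe:

  # Start an SPDK app paused before full init, then poll its RPC socket until it responds
  ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  pid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$pid" 2>/dev/null || { echo "scheduler app exited early" >&2; exit 1; }
    sleep 0.5
  done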
00:06:32.030 05:05:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.030 05:05:21 -- common/autotest_common.sh@10 -- # set +x 00:06:32.289 [2024-12-08 05:05:21.817870] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:32.289 [2024-12-08 05:05:21.818582] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66948 ] 00:06:32.289 [2024-12-08 05:05:21.961151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:32.289 [2024-12-08 05:05:22.006112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.289 [2024-12-08 05:05:22.006261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.289 [2024-12-08 05:05:22.007008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.289 [2024-12-08 05:05:22.007019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.224 05:05:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.224 05:05:22 -- common/autotest_common.sh@862 -- # return 0 00:06:33.224 05:05:22 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:33.224 05:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.224 05:05:22 -- common/autotest_common.sh@10 -- # set +x 00:06:33.224 POWER: Env isn't set yet! 00:06:33.224 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:33.224 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:33.224 POWER: Cannot set governor of lcore 0 to userspace 00:06:33.224 POWER: Attempting to initialise PSTAT power management... 00:06:33.224 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:33.224 POWER: Cannot set governor of lcore 0 to performance 00:06:33.224 POWER: Attempting to initialise AMD PSTATE power management... 00:06:33.224 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:33.224 POWER: Cannot set governor of lcore 0 to userspace 00:06:33.224 POWER: Attempting to initialise CPPC power management... 00:06:33.224 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:33.224 POWER: Cannot set governor of lcore 0 to userspace 00:06:33.224 POWER: Attempting to initialise VM power management... 
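With the target still paused, the test finishes bring-up over RPC: it selects the dynamic scheduler and then releases framework initialization (the framework_start_init call appears just below). The POWER: messages around this point come from the dynamic scheduler probing the available cpufreq governors; none can be set in this environment, so it continues without a DPDK governor, as the lines that follow show. The same two steps, issued by hand against the socket used above:

  # Pick the dynamic scheduler, then let the paused app complete initialization
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init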
00:06:33.224 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:33.224 POWER: Unable to set Power Management Environment for lcore 0 00:06:33.224 [2024-12-08 05:05:22.748503] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:33.224 [2024-12-08 05:05:22.748515] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:33.224 [2024-12-08 05:05:22.748523] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:33.224 [2024-12-08 05:05:22.748534] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:33.224 [2024-12-08 05:05:22.748541] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:33.224 [2024-12-08 05:05:22.748547] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:33.224 05:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.224 05:05:22 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:33.224 05:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.224 05:05:22 -- common/autotest_common.sh@10 -- # set +x 00:06:33.224 [2024-12-08 05:05:22.799750] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:33.224 05:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.224 05:05:22 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:33.224 05:05:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:33.224 05:05:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.224 05:05:22 -- common/autotest_common.sh@10 -- # set +x 00:06:33.224 ************************************ 00:06:33.224 START TEST scheduler_create_thread 00:06:33.224 ************************************ 00:06:33.224 05:05:22 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:06:33.224 05:05:22 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:33.224 05:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.224 05:05:22 -- common/autotest_common.sh@10 -- # set +x 00:06:33.224 2 00:06:33.224 05:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.224 05:05:22 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:33.224 05:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.224 05:05:22 -- common/autotest_common.sh@10 -- # set +x 00:06:33.224 3 00:06:33.224 05:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.224 05:05:22 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:33.224 05:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.224 05:05:22 -- common/autotest_common.sh@10 -- # set +x 00:06:33.224 4 00:06:33.224 05:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.224 05:05:22 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:33.224 05:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.224 05:05:22 -- common/autotest_common.sh@10 -- # set +x 00:06:33.224 5 00:06:33.224 05:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.224 05:05:22 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:33.224 05:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.224 05:05:22 -- common/autotest_common.sh@10 -- # set +x 00:06:33.224 6 00:06:33.224 05:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.224 05:05:22 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:33.224 05:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.224 05:05:22 -- common/autotest_common.sh@10 -- # set +x 00:06:33.224 7 00:06:33.224 05:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.224 05:05:22 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:33.224 05:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.224 05:05:22 -- common/autotest_common.sh@10 -- # set +x 00:06:33.224 8 00:06:33.224 05:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.224 05:05:22 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:33.224 05:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.224 05:05:22 -- common/autotest_common.sh@10 -- # set +x 00:06:33.224 9 00:06:33.224 05:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.224 05:05:22 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:33.224 05:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.224 05:05:22 -- common/autotest_common.sh@10 -- # set +x 00:06:33.224 10 00:06:33.224 05:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.224 05:05:22 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:33.224 05:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.224 05:05:22 -- common/autotest_common.sh@10 -- # set +x 00:06:33.224 05:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.224 05:05:22 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:33.224 05:05:22 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:33.224 05:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.224 05:05:22 -- common/autotest_common.sh@10 -- # set +x 00:06:33.224 05:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.224 05:05:22 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:33.224 05:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.224 05:05:22 -- common/autotest_common.sh@10 -- # set +x 00:06:34.601 05:05:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.601 05:05:24 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:34.601 05:05:24 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:34.601 05:05:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.601 05:05:24 -- common/autotest_common.sh@10 -- # set +x 00:06:35.976 ************************************ 00:06:35.976 END TEST scheduler_create_thread 00:06:35.976 ************************************ 00:06:35.976 05:05:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.976 00:06:35.976 real 0m2.613s 00:06:35.976 user 0m0.017s 00:06:35.976 sys 0m0.007s 00:06:35.976 05:05:25 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.976 05:05:25 -- common/autotest_common.sh@10 -- # set +x 00:06:35.976 05:05:25 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:35.976 05:05:25 -- scheduler/scheduler.sh@46 -- # killprocess 66948 00:06:35.977 05:05:25 -- common/autotest_common.sh@936 -- # '[' -z 66948 ']' 00:06:35.977 05:05:25 -- common/autotest_common.sh@940 -- # kill -0 66948 00:06:35.977 05:05:25 -- common/autotest_common.sh@941 -- # uname 00:06:35.977 05:05:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:35.977 05:05:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66948 00:06:35.977 killing process with pid 66948 00:06:35.977 05:05:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:35.977 05:05:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:35.977 05:05:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66948' 00:06:35.977 05:05:25 -- common/autotest_common.sh@955 -- # kill 66948 00:06:35.977 05:05:25 -- common/autotest_common.sh@960 -- # wait 66948 00:06:36.236 [2024-12-08 05:05:25.906893] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:36.495 ************************************ 00:06:36.495 END TEST event_scheduler 00:06:36.495 ************************************ 00:06:36.495 00:06:36.495 real 0m4.468s 00:06:36.495 user 0m8.489s 00:06:36.495 sys 0m0.326s 00:06:36.495 05:05:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:36.495 05:05:26 -- common/autotest_common.sh@10 -- # set +x 00:06:36.495 05:05:26 -- event/event.sh@51 -- # modprobe -n nbd 00:06:36.495 05:05:26 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:36.495 05:05:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:36.495 05:05:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.495 05:05:26 -- common/autotest_common.sh@10 -- # set +x 00:06:36.495 ************************************ 00:06:36.495 START TEST app_repeat 00:06:36.495 ************************************ 00:06:36.495 05:05:26 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:06:36.495 05:05:26 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.495 05:05:26 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.495 05:05:26 -- event/event.sh@13 -- # local nbd_list 00:06:36.495 05:05:26 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.495 05:05:26 -- event/event.sh@14 -- # local bdev_list 00:06:36.495 05:05:26 -- event/event.sh@15 -- # local repeat_times=4 00:06:36.495 05:05:26 -- event/event.sh@17 -- # modprobe nbd 00:06:36.495 Process app_repeat pid: 67047 00:06:36.495 spdk_app_start Round 0 00:06:36.495 05:05:26 -- event/event.sh@19 -- # repeat_pid=67047 00:06:36.495 05:05:26 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:36.495 05:05:26 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:36.495 05:05:26 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 67047' 00:06:36.495 05:05:26 -- event/event.sh@23 -- # for i in {0..2} 00:06:36.495 05:05:26 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:36.495 05:05:26 -- event/event.sh@25 -- # waitforlisten 67047 /var/tmp/spdk-nbd.sock 00:06:36.495 05:05:26 -- common/autotest_common.sh@829 -- # '[' -z 67047 ']' 00:06:36.495 05:05:26 -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.495 05:05:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:36.495 05:05:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:36.495 05:05:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.495 05:05:26 -- common/autotest_common.sh@10 -- # set +x 00:06:36.495 [2024-12-08 05:05:26.141307] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:36.495 [2024-12-08 05:05:26.141552] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67047 ] 00:06:36.495 [2024-12-08 05:05:26.269326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:36.754 [2024-12-08 05:05:26.305719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.754 [2024-12-08 05:05:26.305757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.754 05:05:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.754 05:05:26 -- common/autotest_common.sh@862 -- # return 0 00:06:36.754 05:05:26 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.013 Malloc0 00:06:37.013 05:05:26 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.272 Malloc1 00:06:37.272 05:05:26 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.272 05:05:26 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.272 05:05:26 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.272 05:05:26 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:37.272 05:05:26 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.272 05:05:26 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:37.272 05:05:26 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.272 05:05:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.272 05:05:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.272 05:05:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:37.272 05:05:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.272 05:05:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:37.272 05:05:26 -- bdev/nbd_common.sh@12 -- # local i 00:06:37.272 05:05:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:37.272 05:05:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.272 05:05:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:37.532 /dev/nbd0 00:06:37.532 05:05:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:37.532 05:05:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:37.532 05:05:27 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:37.532 05:05:27 -- common/autotest_common.sh@867 -- # local i 00:06:37.532 05:05:27 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:37.532 05:05:27 -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:37.532 05:05:27 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:37.532 05:05:27 -- common/autotest_common.sh@871 -- # break 00:06:37.532 05:05:27 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:37.532 05:05:27 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:37.532 05:05:27 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.532 1+0 records in 00:06:37.532 1+0 records out 00:06:37.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268921 s, 15.2 MB/s 00:06:37.532 05:05:27 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.532 05:05:27 -- common/autotest_common.sh@884 -- # size=4096 00:06:37.532 05:05:27 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.533 05:05:27 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:37.533 05:05:27 -- common/autotest_common.sh@887 -- # return 0 00:06:37.533 05:05:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.533 05:05:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.533 05:05:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:37.792 /dev/nbd1 00:06:37.792 05:05:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:37.792 05:05:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:37.792 05:05:27 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:37.792 05:05:27 -- common/autotest_common.sh@867 -- # local i 00:06:37.792 05:05:27 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:37.792 05:05:27 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:37.792 05:05:27 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:37.792 05:05:27 -- common/autotest_common.sh@871 -- # break 00:06:37.792 05:05:27 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:37.792 05:05:27 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:37.792 05:05:27 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.792 1+0 records in 00:06:37.792 1+0 records out 00:06:37.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283675 s, 14.4 MB/s 00:06:37.792 05:05:27 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.792 05:05:27 -- common/autotest_common.sh@884 -- # size=4096 00:06:37.792 05:05:27 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.792 05:05:27 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:37.792 05:05:27 -- common/autotest_common.sh@887 -- # return 0 00:06:37.792 05:05:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.792 05:05:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.792 05:05:27 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.792 05:05:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.792 05:05:27 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.051 05:05:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:38.051 { 00:06:38.051 "nbd_device": "/dev/nbd0", 00:06:38.051 "bdev_name": "Malloc0" 00:06:38.051 }, 00:06:38.051 { 00:06:38.051 "nbd_device": "/dev/nbd1", 
00:06:38.051 "bdev_name": "Malloc1" 00:06:38.051 } 00:06:38.051 ]' 00:06:38.051 05:05:27 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:38.051 { 00:06:38.051 "nbd_device": "/dev/nbd0", 00:06:38.051 "bdev_name": "Malloc0" 00:06:38.051 }, 00:06:38.051 { 00:06:38.051 "nbd_device": "/dev/nbd1", 00:06:38.051 "bdev_name": "Malloc1" 00:06:38.051 } 00:06:38.051 ]' 00:06:38.051 05:05:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:38.311 /dev/nbd1' 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:38.311 /dev/nbd1' 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@65 -- # count=2 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@95 -- # count=2 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:38.311 256+0 records in 00:06:38.311 256+0 records out 00:06:38.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00709961 s, 148 MB/s 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:38.311 256+0 records in 00:06:38.311 256+0 records out 00:06:38.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263693 s, 39.8 MB/s 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:38.311 256+0 records in 00:06:38.311 256+0 records out 00:06:38.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240881 s, 43.5 MB/s 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@51 -- # local i 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.311 05:05:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.571 05:05:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.571 05:05:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.571 05:05:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.571 05:05:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.571 05:05:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.571 05:05:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.571 05:05:28 -- bdev/nbd_common.sh@41 -- # break 00:06:38.571 05:05:28 -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.571 05:05:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.571 05:05:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:38.831 05:05:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:38.831 05:05:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:38.831 05:05:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:38.831 05:05:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.831 05:05:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.831 05:05:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:38.831 05:05:28 -- bdev/nbd_common.sh@41 -- # break 00:06:38.831 05:05:28 -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.831 05:05:28 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.831 05:05:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.831 05:05:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.090 05:05:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:39.090 05:05:28 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:39.090 05:05:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.090 05:05:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:39.090 05:05:28 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:39.090 05:05:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.090 05:05:28 -- bdev/nbd_common.sh@65 -- # true 00:06:39.090 05:05:28 -- bdev/nbd_common.sh@65 -- # count=0 00:06:39.090 05:05:28 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:39.090 05:05:28 -- bdev/nbd_common.sh@104 -- # count=0 00:06:39.090 05:05:28 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:39.090 05:05:28 -- bdev/nbd_common.sh@109 -- # return 0 00:06:39.090 05:05:28 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:39.350 05:05:29 -- event/event.sh@35 -- # sleep 3 00:06:39.610 [2024-12-08 05:05:29.238927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.610 [2024-12-08 05:05:29.270493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.610 [2024-12-08 
05:05:29.270504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.610 [2024-12-08 05:05:29.300229] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:39.610 [2024-12-08 05:05:29.300280] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:42.897 spdk_app_start Round 1 00:06:42.898 05:05:32 -- event/event.sh@23 -- # for i in {0..2} 00:06:42.898 05:05:32 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:42.898 05:05:32 -- event/event.sh@25 -- # waitforlisten 67047 /var/tmp/spdk-nbd.sock 00:06:42.898 05:05:32 -- common/autotest_common.sh@829 -- # '[' -z 67047 ']' 00:06:42.898 05:05:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:42.898 05:05:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.898 05:05:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:42.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:42.898 05:05:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.898 05:05:32 -- common/autotest_common.sh@10 -- # set +x 00:06:42.898 05:05:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.898 05:05:32 -- common/autotest_common.sh@862 -- # return 0 00:06:42.898 05:05:32 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:42.898 Malloc0 00:06:42.898 05:05:32 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.156 Malloc1 00:06:43.156 05:05:32 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.156 05:05:32 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.156 05:05:32 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.156 05:05:32 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:43.156 05:05:32 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.156 05:05:32 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:43.156 05:05:32 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.156 05:05:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.156 05:05:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.156 05:05:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:43.156 05:05:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.156 05:05:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:43.156 05:05:32 -- bdev/nbd_common.sh@12 -- # local i 00:06:43.156 05:05:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:43.156 05:05:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.156 05:05:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:43.449 /dev/nbd0 00:06:43.449 05:05:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:43.449 05:05:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:43.449 05:05:33 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:43.449 05:05:33 -- common/autotest_common.sh@867 -- # local i 00:06:43.449 05:05:33 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:06:43.449 05:05:33 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:43.449 05:05:33 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:43.449 05:05:33 -- common/autotest_common.sh@871 -- # break 00:06:43.449 05:05:33 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:43.449 05:05:33 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:43.449 05:05:33 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.449 1+0 records in 00:06:43.449 1+0 records out 00:06:43.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278587 s, 14.7 MB/s 00:06:43.449 05:05:33 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.449 05:05:33 -- common/autotest_common.sh@884 -- # size=4096 00:06:43.449 05:05:33 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.449 05:05:33 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:43.449 05:05:33 -- common/autotest_common.sh@887 -- # return 0 00:06:43.449 05:05:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.449 05:05:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.449 05:05:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:43.707 /dev/nbd1 00:06:43.707 05:05:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:43.707 05:05:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:43.707 05:05:33 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:43.707 05:05:33 -- common/autotest_common.sh@867 -- # local i 00:06:43.707 05:05:33 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:43.707 05:05:33 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:43.707 05:05:33 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:43.707 05:05:33 -- common/autotest_common.sh@871 -- # break 00:06:43.707 05:05:33 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:43.707 05:05:33 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:43.707 05:05:33 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.707 1+0 records in 00:06:43.707 1+0 records out 00:06:43.707 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216126 s, 19.0 MB/s 00:06:43.707 05:05:33 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.707 05:05:33 -- common/autotest_common.sh@884 -- # size=4096 00:06:43.708 05:05:33 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.708 05:05:33 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:43.708 05:05:33 -- common/autotest_common.sh@887 -- # return 0 00:06:43.708 05:05:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.708 05:05:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.708 05:05:33 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:43.708 05:05:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.708 05:05:33 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.966 05:05:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:43.966 { 00:06:43.966 "nbd_device": "/dev/nbd0", 00:06:43.966 "bdev_name": "Malloc0" 00:06:43.966 }, 00:06:43.966 { 00:06:43.966 
"nbd_device": "/dev/nbd1", 00:06:43.966 "bdev_name": "Malloc1" 00:06:43.966 } 00:06:43.966 ]' 00:06:43.966 05:05:33 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:43.966 { 00:06:43.966 "nbd_device": "/dev/nbd0", 00:06:43.966 "bdev_name": "Malloc0" 00:06:43.966 }, 00:06:43.966 { 00:06:43.966 "nbd_device": "/dev/nbd1", 00:06:43.966 "bdev_name": "Malloc1" 00:06:43.966 } 00:06:43.966 ]' 00:06:43.966 05:05:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:44.225 /dev/nbd1' 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:44.225 /dev/nbd1' 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@65 -- # count=2 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@95 -- # count=2 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:44.225 256+0 records in 00:06:44.225 256+0 records out 00:06:44.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107052 s, 97.9 MB/s 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:44.225 256+0 records in 00:06:44.225 256+0 records out 00:06:44.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243923 s, 43.0 MB/s 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:44.225 256+0 records in 00:06:44.225 256+0 records out 00:06:44.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293826 s, 35.7 MB/s 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:44.225 05:05:33 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@51 -- # local i 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.225 05:05:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:44.483 05:05:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:44.483 05:05:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:44.483 05:05:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:44.483 05:05:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.483 05:05:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.483 05:05:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:44.483 05:05:34 -- bdev/nbd_common.sh@41 -- # break 00:06:44.483 05:05:34 -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.483 05:05:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.483 05:05:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:44.742 05:05:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:44.742 05:05:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:44.742 05:05:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:44.742 05:05:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.742 05:05:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.742 05:05:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:44.742 05:05:34 -- bdev/nbd_common.sh@41 -- # break 00:06:44.742 05:05:34 -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.742 05:05:34 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.742 05:05:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.742 05:05:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.001 05:05:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:45.001 05:05:34 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:45.001 05:05:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.001 05:05:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:45.001 05:05:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.001 05:05:34 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:45.001 05:05:34 -- bdev/nbd_common.sh@65 -- # true 00:06:45.001 05:05:34 -- bdev/nbd_common.sh@65 -- # count=0 00:06:45.001 05:05:34 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:45.001 05:05:34 -- bdev/nbd_common.sh@104 -- # count=0 00:06:45.001 05:05:34 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:45.001 05:05:34 -- bdev/nbd_common.sh@109 -- # return 0 00:06:45.001 05:05:34 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:45.261 05:05:35 -- event/event.sh@35 -- # sleep 3 00:06:45.519 [2024-12-08 05:05:35.112881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:45.519 [2024-12-08 05:05:35.143454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
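For reference, the round that just finished boils down to the hand-runnable sequence below. This is a condensed sketch reconstructed from the trace: the RPC socket, bdev sizes, device names and cmp length mirror the log, while the temp-file location and error handling are simplified assumptions.

sock=/var/tmp/spdk-nbd.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc -s $sock bdev_malloc_create 64 4096               # creates Malloc0
$rpc -s $sock bdev_malloc_create 64 4096               # creates Malloc1
$rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0         # expose the bdevs as NBD devices
$rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256    # 1 MiB of random data
for dev in /dev/nbd0 /dev/nbd1; do
    dd if=/tmp/nbdrandtest of=$dev bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest $dev                 # any mismatch fails the round
done

$rpc -s $sock nbd_stop_disk /dev/nbd0
$rpc -s $sock nbd_stop_disk /dev/nbd1
$rpc -s $sock nbd_get_disks                            # expected to report an empty list
$rpc -s $sock spdk_kill_instance SIGTERM               # hands control to the next round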
00:06:45.519 [2024-12-08 05:05:35.143461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.519 [2024-12-08 05:05:35.173003] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:45.519 [2024-12-08 05:05:35.173069] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:48.851 spdk_app_start Round 2 00:06:48.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:48.851 05:05:38 -- event/event.sh@23 -- # for i in {0..2} 00:06:48.851 05:05:38 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:48.851 05:05:38 -- event/event.sh@25 -- # waitforlisten 67047 /var/tmp/spdk-nbd.sock 00:06:48.851 05:05:38 -- common/autotest_common.sh@829 -- # '[' -z 67047 ']' 00:06:48.851 05:05:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:48.851 05:05:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.851 05:05:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:48.851 05:05:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.851 05:05:38 -- common/autotest_common.sh@10 -- # set +x 00:06:48.851 05:05:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.851 05:05:38 -- common/autotest_common.sh@862 -- # return 0 00:06:48.851 05:05:38 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.851 Malloc0 00:06:48.851 05:05:38 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.121 Malloc1 00:06:49.121 05:05:38 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.121 05:05:38 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.121 05:05:38 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.121 05:05:38 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:49.121 05:05:38 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.121 05:05:38 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:49.121 05:05:38 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.121 05:05:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.121 05:05:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.121 05:05:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:49.121 05:05:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.121 05:05:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:49.121 05:05:38 -- bdev/nbd_common.sh@12 -- # local i 00:06:49.121 05:05:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:49.121 05:05:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.121 05:05:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:49.381 /dev/nbd0 00:06:49.381 05:05:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.381 05:05:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.381 05:05:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:49.381 05:05:39 -- common/autotest_common.sh@867 -- # local i 00:06:49.381 05:05:39 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:49.381 05:05:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:49.381 05:05:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:49.381 05:05:39 -- common/autotest_common.sh@871 -- # break 00:06:49.381 05:05:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:49.381 05:05:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:49.381 05:05:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.381 1+0 records in 00:06:49.381 1+0 records out 00:06:49.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221954 s, 18.5 MB/s 00:06:49.381 05:05:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.381 05:05:39 -- common/autotest_common.sh@884 -- # size=4096 00:06:49.381 05:05:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.381 05:05:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:49.381 05:05:39 -- common/autotest_common.sh@887 -- # return 0 00:06:49.381 05:05:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.381 05:05:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.381 05:05:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:49.640 /dev/nbd1 00:06:49.640 05:05:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:49.640 05:05:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:49.640 05:05:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:49.640 05:05:39 -- common/autotest_common.sh@867 -- # local i 00:06:49.640 05:05:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:49.640 05:05:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:49.640 05:05:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:49.640 05:05:39 -- common/autotest_common.sh@871 -- # break 00:06:49.640 05:05:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:49.640 05:05:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:49.640 05:05:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.640 1+0 records in 00:06:49.640 1+0 records out 00:06:49.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306726 s, 13.4 MB/s 00:06:49.640 05:05:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.640 05:05:39 -- common/autotest_common.sh@884 -- # size=4096 00:06:49.640 05:05:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.640 05:05:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:49.640 05:05:39 -- common/autotest_common.sh@887 -- # return 0 00:06:49.640 05:05:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.640 05:05:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.640 05:05:39 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.640 05:05:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.640 05:05:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:50.209 { 00:06:50.209 "nbd_device": "/dev/nbd0", 00:06:50.209 "bdev_name": "Malloc0" 
00:06:50.209 }, 00:06:50.209 { 00:06:50.209 "nbd_device": "/dev/nbd1", 00:06:50.209 "bdev_name": "Malloc1" 00:06:50.209 } 00:06:50.209 ]' 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.209 { 00:06:50.209 "nbd_device": "/dev/nbd0", 00:06:50.209 "bdev_name": "Malloc0" 00:06:50.209 }, 00:06:50.209 { 00:06:50.209 "nbd_device": "/dev/nbd1", 00:06:50.209 "bdev_name": "Malloc1" 00:06:50.209 } 00:06:50.209 ]' 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:50.209 /dev/nbd1' 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:50.209 /dev/nbd1' 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@65 -- # count=2 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@95 -- # count=2 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:50.209 256+0 records in 00:06:50.209 256+0 records out 00:06:50.209 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109027 s, 96.2 MB/s 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:50.209 256+0 records in 00:06:50.209 256+0 records out 00:06:50.209 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253155 s, 41.4 MB/s 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:50.209 256+0 records in 00:06:50.209 256+0 records out 00:06:50.209 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259727 s, 40.4 MB/s 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:50.209 05:05:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:50.210 05:05:39 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:50.210 05:05:39 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:50.210 05:05:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.210 05:05:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:50.210 05:05:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.210 05:05:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:50.210 05:05:39 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:50.210 05:05:39 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:50.210 05:05:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.210 05:05:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.210 05:05:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:50.210 05:05:39 -- bdev/nbd_common.sh@51 -- # local i 00:06:50.210 05:05:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.210 05:05:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:50.469 05:05:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.469 05:05:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.469 05:05:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.469 05:05:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.469 05:05:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.469 05:05:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.469 05:05:40 -- bdev/nbd_common.sh@41 -- # break 00:06:50.469 05:05:40 -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.469 05:05:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.469 05:05:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:50.728 05:05:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:50.728 05:05:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:50.728 05:05:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:50.728 05:05:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.728 05:05:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.728 05:05:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:50.728 05:05:40 -- bdev/nbd_common.sh@41 -- # break 00:06:50.728 05:05:40 -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.728 05:05:40 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.728 05:05:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.728 05:05:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.987 05:05:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:50.987 05:05:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.987 05:05:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:50.987 05:05:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:50.987 05:05:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:50.987 05:05:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.245 05:05:40 -- bdev/nbd_common.sh@65 -- # true 00:06:51.245 05:05:40 -- bdev/nbd_common.sh@65 -- # count=0 00:06:51.245 05:05:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:51.245 05:05:40 -- bdev/nbd_common.sh@104 -- # count=0 00:06:51.245 05:05:40 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:51.245 05:05:40 -- bdev/nbd_common.sh@109 -- # return 0 00:06:51.245 05:05:40 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:51.504 05:05:41 -- event/event.sh@35 -- # sleep 3 00:06:51.504 [2024-12-08 05:05:41.156846] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.504 [2024-12-08 05:05:41.187936] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:06:51.504 [2024-12-08 05:05:41.187946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.504 [2024-12-08 05:05:41.217530] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:51.504 [2024-12-08 05:05:41.217593] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:54.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:54.796 05:05:44 -- event/event.sh@38 -- # waitforlisten 67047 /var/tmp/spdk-nbd.sock 00:06:54.796 05:05:44 -- common/autotest_common.sh@829 -- # '[' -z 67047 ']' 00:06:54.796 05:05:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.796 05:05:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.796 05:05:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:54.796 05:05:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.796 05:05:44 -- common/autotest_common.sh@10 -- # set +x 00:06:54.796 05:05:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.796 05:05:44 -- common/autotest_common.sh@862 -- # return 0 00:06:54.796 05:05:44 -- event/event.sh@39 -- # killprocess 67047 00:06:54.796 05:05:44 -- common/autotest_common.sh@936 -- # '[' -z 67047 ']' 00:06:54.796 05:05:44 -- common/autotest_common.sh@940 -- # kill -0 67047 00:06:54.796 05:05:44 -- common/autotest_common.sh@941 -- # uname 00:06:54.796 05:05:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:54.796 05:05:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67047 00:06:54.796 killing process with pid 67047 00:06:54.796 05:05:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:54.796 05:05:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:54.796 05:05:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67047' 00:06:54.796 05:05:44 -- common/autotest_common.sh@955 -- # kill 67047 00:06:54.796 05:05:44 -- common/autotest_common.sh@960 -- # wait 67047 00:06:54.796 spdk_app_start is called in Round 0. 00:06:54.796 Shutdown signal received, stop current app iteration 00:06:54.796 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:54.796 spdk_app_start is called in Round 1. 00:06:54.796 Shutdown signal received, stop current app iteration 00:06:54.796 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:54.796 spdk_app_start is called in Round 2. 00:06:54.796 Shutdown signal received, stop current app iteration 00:06:54.796 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:54.796 spdk_app_start is called in Round 3. 
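Stepping back, the whole app_repeat test is the loop below from event.sh, shown as a simplified sketch: waitforlisten, nbd_rpc_data_verify and killprocess are the helper names visible in the trace, and $pid stands in for the app_repeat process (67047 in this run).

sock=/var/tmp/spdk-nbd.sock
for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$pid" "$sock"                       # the app re-listens after each SIGTERM
    rpc.py -s "$sock" bdev_malloc_create 64 4096       # Malloc0
    rpc.py -s "$sock" bdev_malloc_create 64 4096       # Malloc1
    nbd_rpc_data_verify "$sock" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    rpc.py -s "$sock" spdk_kill_instance SIGTERM       # ends the current iteration
    sleep 3
done
waitforlisten "$pid" "$sock"                           # Round 3 comes up one last time
killprocess "$pid"                                     # final teardown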
00:06:54.796 Shutdown signal received, stop current app iteration 00:06:54.796 05:05:44 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:54.796 05:05:44 -- event/event.sh@42 -- # return 0 00:06:54.796 00:06:54.796 real 0m18.368s 00:06:54.796 user 0m42.175s 00:06:54.796 sys 0m2.443s 00:06:54.796 05:05:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:54.796 ************************************ 00:06:54.796 END TEST app_repeat 00:06:54.796 ************************************ 00:06:54.796 05:05:44 -- common/autotest_common.sh@10 -- # set +x 00:06:54.796 05:05:44 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:54.796 05:05:44 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:54.796 05:05:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:54.796 05:05:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.796 05:05:44 -- common/autotest_common.sh@10 -- # set +x 00:06:54.796 ************************************ 00:06:54.796 START TEST cpu_locks 00:06:54.796 ************************************ 00:06:54.796 05:05:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:55.055 * Looking for test storage... 00:06:55.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:55.055 05:05:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:55.055 05:05:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:55.055 05:05:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:55.055 05:05:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:55.055 05:05:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:55.055 05:05:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:55.055 05:05:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:55.055 05:05:44 -- scripts/common.sh@335 -- # IFS=.-: 00:06:55.055 05:05:44 -- scripts/common.sh@335 -- # read -ra ver1 00:06:55.055 05:05:44 -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.055 05:05:44 -- scripts/common.sh@336 -- # read -ra ver2 00:06:55.055 05:05:44 -- scripts/common.sh@337 -- # local 'op=<' 00:06:55.055 05:05:44 -- scripts/common.sh@339 -- # ver1_l=2 00:06:55.055 05:05:44 -- scripts/common.sh@340 -- # ver2_l=1 00:06:55.055 05:05:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:55.055 05:05:44 -- scripts/common.sh@343 -- # case "$op" in 00:06:55.055 05:05:44 -- scripts/common.sh@344 -- # : 1 00:06:55.055 05:05:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:55.055 05:05:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.055 05:05:44 -- scripts/common.sh@364 -- # decimal 1 00:06:55.055 05:05:44 -- scripts/common.sh@352 -- # local d=1 00:06:55.055 05:05:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.055 05:05:44 -- scripts/common.sh@354 -- # echo 1 00:06:55.055 05:05:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:55.055 05:05:44 -- scripts/common.sh@365 -- # decimal 2 00:06:55.055 05:05:44 -- scripts/common.sh@352 -- # local d=2 00:06:55.055 05:05:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.055 05:05:44 -- scripts/common.sh@354 -- # echo 2 00:06:55.055 05:05:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:55.055 05:05:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:55.055 05:05:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:55.055 05:05:44 -- scripts/common.sh@367 -- # return 0 00:06:55.055 05:05:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.055 05:05:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:55.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.055 --rc genhtml_branch_coverage=1 00:06:55.055 --rc genhtml_function_coverage=1 00:06:55.055 --rc genhtml_legend=1 00:06:55.055 --rc geninfo_all_blocks=1 00:06:55.055 --rc geninfo_unexecuted_blocks=1 00:06:55.055 00:06:55.055 ' 00:06:55.055 05:05:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:55.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.055 --rc genhtml_branch_coverage=1 00:06:55.055 --rc genhtml_function_coverage=1 00:06:55.055 --rc genhtml_legend=1 00:06:55.055 --rc geninfo_all_blocks=1 00:06:55.055 --rc geninfo_unexecuted_blocks=1 00:06:55.055 00:06:55.055 ' 00:06:55.055 05:05:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:55.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.055 --rc genhtml_branch_coverage=1 00:06:55.055 --rc genhtml_function_coverage=1 00:06:55.056 --rc genhtml_legend=1 00:06:55.056 --rc geninfo_all_blocks=1 00:06:55.056 --rc geninfo_unexecuted_blocks=1 00:06:55.056 00:06:55.056 ' 00:06:55.056 05:05:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:55.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.056 --rc genhtml_branch_coverage=1 00:06:55.056 --rc genhtml_function_coverage=1 00:06:55.056 --rc genhtml_legend=1 00:06:55.056 --rc geninfo_all_blocks=1 00:06:55.056 --rc geninfo_unexecuted_blocks=1 00:06:55.056 00:06:55.056 ' 00:06:55.056 05:05:44 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:55.056 05:05:44 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:55.056 05:05:44 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:55.056 05:05:44 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:55.056 05:05:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:55.056 05:05:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.056 05:05:44 -- common/autotest_common.sh@10 -- # set +x 00:06:55.056 ************************************ 00:06:55.056 START TEST default_locks 00:06:55.056 ************************************ 00:06:55.056 05:05:44 -- common/autotest_common.sh@1114 -- # default_locks 00:06:55.056 05:05:44 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=67478 00:06:55.056 05:05:44 -- event/cpu_locks.sh@47 -- # waitforlisten 67478 00:06:55.056 05:05:44 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
-m 0x1 00:06:55.056 05:05:44 -- common/autotest_common.sh@829 -- # '[' -z 67478 ']' 00:06:55.056 05:05:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.056 05:05:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.056 05:05:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.056 05:05:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.056 05:05:44 -- common/autotest_common.sh@10 -- # set +x 00:06:55.056 [2024-12-08 05:05:44.760692] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:55.056 [2024-12-08 05:05:44.760990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67478 ] 00:06:55.314 [2024-12-08 05:05:44.890859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.314 [2024-12-08 05:05:44.923272] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:55.314 [2024-12-08 05:05:44.923424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.249 05:05:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.249 05:05:45 -- common/autotest_common.sh@862 -- # return 0 00:06:56.249 05:05:45 -- event/cpu_locks.sh@49 -- # locks_exist 67478 00:06:56.249 05:05:45 -- event/cpu_locks.sh@22 -- # lslocks -p 67478 00:06:56.249 05:05:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.249 05:05:46 -- event/cpu_locks.sh@50 -- # killprocess 67478 00:06:56.249 05:05:46 -- common/autotest_common.sh@936 -- # '[' -z 67478 ']' 00:06:56.249 05:05:46 -- common/autotest_common.sh@940 -- # kill -0 67478 00:06:56.249 05:05:46 -- common/autotest_common.sh@941 -- # uname 00:06:56.249 05:05:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:56.249 05:05:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67478 00:06:56.508 killing process with pid 67478 00:06:56.508 05:05:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:56.508 05:05:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:56.508 05:05:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67478' 00:06:56.508 05:05:46 -- common/autotest_common.sh@955 -- # kill 67478 00:06:56.508 05:05:46 -- common/autotest_common.sh@960 -- # wait 67478 00:06:56.508 05:05:46 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 67478 00:06:56.508 05:05:46 -- common/autotest_common.sh@650 -- # local es=0 00:06:56.508 05:05:46 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67478 00:06:56.508 05:05:46 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:56.508 05:05:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.508 05:05:46 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:56.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
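The default_locks assertion above amounts to the following minimal sketch, using the same commands the trace shows (waitforlisten is the harness helper from autotest_common.sh; the backgrounding and cleanup are simplified):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
pid=$!
waitforlisten "$pid" /var/tmp/spdk.sock                # block until the RPC socket is up

lslocks -p "$pid" | grep -q spdk_cpu_lock              # core 0 must appear as a held file lock

kill "$pid"; wait "$pid"
# With the target gone, repeating waitforlisten on the same pid is expected to fail,
# which is exactly the "process ... is no longer running" error printed next in the log.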
00:06:56.508 ERROR: process (pid: 67478) is no longer running 00:06:56.508 05:05:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.508 05:05:46 -- common/autotest_common.sh@653 -- # waitforlisten 67478 00:06:56.508 05:05:46 -- common/autotest_common.sh@829 -- # '[' -z 67478 ']' 00:06:56.508 05:05:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.508 05:05:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.508 05:05:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.508 05:05:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.508 05:05:46 -- common/autotest_common.sh@10 -- # set +x 00:06:56.508 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67478) - No such process 00:06:56.508 05:05:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.508 05:05:46 -- common/autotest_common.sh@862 -- # return 1 00:06:56.508 05:05:46 -- common/autotest_common.sh@653 -- # es=1 00:06:56.508 05:05:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:56.508 05:05:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:56.508 05:05:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:56.508 05:05:46 -- event/cpu_locks.sh@54 -- # no_locks 00:06:56.508 05:05:46 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:56.508 05:05:46 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:56.508 05:05:46 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:56.508 00:06:56.508 real 0m1.578s 00:06:56.508 user 0m1.768s 00:06:56.508 sys 0m0.383s 00:06:56.508 05:05:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.508 ************************************ 00:06:56.508 END TEST default_locks 00:06:56.508 ************************************ 00:06:56.508 05:05:46 -- common/autotest_common.sh@10 -- # set +x 00:06:56.767 05:05:46 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:56.767 05:05:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:56.767 05:05:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.767 05:05:46 -- common/autotest_common.sh@10 -- # set +x 00:06:56.767 ************************************ 00:06:56.767 START TEST default_locks_via_rpc 00:06:56.767 ************************************ 00:06:56.767 05:05:46 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:56.767 05:05:46 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=67526 00:06:56.767 05:05:46 -- event/cpu_locks.sh@63 -- # waitforlisten 67526 00:06:56.767 05:05:46 -- common/autotest_common.sh@829 -- # '[' -z 67526 ']' 00:06:56.767 05:05:46 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.767 05:05:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.767 05:05:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.767 05:05:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.767 05:05:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.767 05:05:46 -- common/autotest_common.sh@10 -- # set +x 00:06:56.767 [2024-12-08 05:05:46.402345] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:56.767 [2024-12-08 05:05:46.402442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67526 ] 00:06:56.767 [2024-12-08 05:05:46.544487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.027 [2024-12-08 05:05:46.578511] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:57.027 [2024-12-08 05:05:46.578946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.597 05:05:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.597 05:05:47 -- common/autotest_common.sh@862 -- # return 0 00:06:57.597 05:05:47 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:57.597 05:05:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.597 05:05:47 -- common/autotest_common.sh@10 -- # set +x 00:06:57.597 05:05:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.597 05:05:47 -- event/cpu_locks.sh@67 -- # no_locks 00:06:57.597 05:05:47 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:57.597 05:05:47 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:57.597 05:05:47 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:57.597 05:05:47 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:57.597 05:05:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.597 05:05:47 -- common/autotest_common.sh@10 -- # set +x 00:06:57.597 05:05:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.597 05:05:47 -- event/cpu_locks.sh@71 -- # locks_exist 67526 00:06:57.597 05:05:47 -- event/cpu_locks.sh@22 -- # lslocks -p 67526 00:06:57.597 05:05:47 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:58.165 05:05:47 -- event/cpu_locks.sh@73 -- # killprocess 67526 00:06:58.165 05:05:47 -- common/autotest_common.sh@936 -- # '[' -z 67526 ']' 00:06:58.165 05:05:47 -- common/autotest_common.sh@940 -- # kill -0 67526 00:06:58.165 05:05:47 -- common/autotest_common.sh@941 -- # uname 00:06:58.165 05:05:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:58.165 05:05:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67526 00:06:58.165 killing process with pid 67526 00:06:58.165 05:05:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:58.165 05:05:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:58.165 05:05:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67526' 00:06:58.165 05:05:47 -- common/autotest_common.sh@955 -- # kill 67526 00:06:58.165 05:05:47 -- common/autotest_common.sh@960 -- # wait 67526 00:06:58.424 00:06:58.424 real 0m1.713s 00:06:58.424 user 0m1.955s 00:06:58.424 sys 0m0.436s 00:06:58.424 05:05:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.424 ************************************ 00:06:58.424 END TEST default_locks_via_rpc 00:06:58.424 ************************************ 00:06:58.424 05:05:48 -- common/autotest_common.sh@10 -- # set +x 00:06:58.424 05:05:48 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:58.424 05:05:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:58.424 05:05:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.424 05:05:48 -- common/autotest_common.sh@10 -- # set +x 00:06:58.424 
************************************ 00:06:58.424 START TEST non_locking_app_on_locked_coremask 00:06:58.424 ************************************ 00:06:58.424 05:05:48 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:58.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.424 05:05:48 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=67571 00:06:58.424 05:05:48 -- event/cpu_locks.sh@81 -- # waitforlisten 67571 /var/tmp/spdk.sock 00:06:58.424 05:05:48 -- common/autotest_common.sh@829 -- # '[' -z 67571 ']' 00:06:58.424 05:05:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.424 05:05:48 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:58.424 05:05:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.424 05:05:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.424 05:05:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.424 05:05:48 -- common/autotest_common.sh@10 -- # set +x 00:06:58.424 [2024-12-08 05:05:48.161071] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:58.424 [2024-12-08 05:05:48.161163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67571 ] 00:06:58.683 [2024-12-08 05:05:48.296711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.683 [2024-12-08 05:05:48.331091] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:58.683 [2024-12-08 05:05:48.331520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.622 05:05:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.622 05:05:49 -- common/autotest_common.sh@862 -- # return 0 00:06:59.622 05:05:49 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=67587 00:06:59.622 05:05:49 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:59.622 05:05:49 -- event/cpu_locks.sh@85 -- # waitforlisten 67587 /var/tmp/spdk2.sock 00:06:59.622 05:05:49 -- common/autotest_common.sh@829 -- # '[' -z 67587 ']' 00:06:59.622 05:05:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.622 05:05:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.622 05:05:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.622 05:05:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.622 05:05:49 -- common/autotest_common.sh@10 -- # set +x 00:06:59.622 [2024-12-08 05:05:49.211326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.622 [2024-12-08 05:05:49.211430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67587 ] 00:06:59.622 [2024-12-08 05:05:49.356106] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
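What non_locking_app_on_locked_coremask exercises can be sketched as two launches, simplified from the flags in the trace; the second instance can share core 0 only because it opts out of the CPU-core locks (the kill/wait teardown here is a stand-in for the killprocess helper).

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

$spdk_tgt -m 0x1 &                                     # first instance takes the core-0 lock
pid1=$!
waitforlisten "$pid1" /var/tmp/spdk.sock

$spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!                                                # logs "CPU core locks deactivated."
waitforlisten "$pid2" /var/tmp/spdk2.sock

lslocks -p "$pid1" | grep -q spdk_cpu_lock             # the lock stays with the first instance
kill "$pid2"; wait "$pid2"
kill "$pid1"; wait "$pid1"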
00:06:59.622 [2024-12-08 05:05:49.356156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.881 [2024-12-08 05:05:49.419512] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:59.881 [2024-12-08 05:05:49.419720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.449 05:05:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.449 05:05:50 -- common/autotest_common.sh@862 -- # return 0 00:07:00.449 05:05:50 -- event/cpu_locks.sh@87 -- # locks_exist 67571 00:07:00.449 05:05:50 -- event/cpu_locks.sh@22 -- # lslocks -p 67571 00:07:00.449 05:05:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.384 05:05:50 -- event/cpu_locks.sh@89 -- # killprocess 67571 00:07:01.384 05:05:50 -- common/autotest_common.sh@936 -- # '[' -z 67571 ']' 00:07:01.384 05:05:50 -- common/autotest_common.sh@940 -- # kill -0 67571 00:07:01.384 05:05:50 -- common/autotest_common.sh@941 -- # uname 00:07:01.384 05:05:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:01.384 05:05:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67571 00:07:01.384 05:05:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:01.384 05:05:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:01.384 05:05:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67571' 00:07:01.384 killing process with pid 67571 00:07:01.384 05:05:50 -- common/autotest_common.sh@955 -- # kill 67571 00:07:01.384 05:05:50 -- common/autotest_common.sh@960 -- # wait 67571 00:07:01.642 05:05:51 -- event/cpu_locks.sh@90 -- # killprocess 67587 00:07:01.642 05:05:51 -- common/autotest_common.sh@936 -- # '[' -z 67587 ']' 00:07:01.642 05:05:51 -- common/autotest_common.sh@940 -- # kill -0 67587 00:07:01.642 05:05:51 -- common/autotest_common.sh@941 -- # uname 00:07:01.642 05:05:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:01.642 05:05:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67587 00:07:01.642 killing process with pid 67587 00:07:01.642 05:05:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:01.642 05:05:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:01.642 05:05:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67587' 00:07:01.642 05:05:51 -- common/autotest_common.sh@955 -- # kill 67587 00:07:01.642 05:05:51 -- common/autotest_common.sh@960 -- # wait 67587 00:07:01.901 00:07:01.901 real 0m3.484s 00:07:01.901 user 0m4.164s 00:07:01.901 sys 0m0.828s 00:07:01.901 ************************************ 00:07:01.901 END TEST non_locking_app_on_locked_coremask 00:07:01.901 ************************************ 00:07:01.901 05:05:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.901 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:07:01.901 05:05:51 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:01.901 05:05:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.901 05:05:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.901 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:07:01.901 ************************************ 00:07:01.901 START TEST locking_app_on_unlocked_coremask 00:07:01.901 ************************************ 00:07:01.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:01.901 05:05:51 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:07:01.901 05:05:51 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=67649 00:07:01.901 05:05:51 -- event/cpu_locks.sh@99 -- # waitforlisten 67649 /var/tmp/spdk.sock 00:07:01.901 05:05:51 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:01.901 05:05:51 -- common/autotest_common.sh@829 -- # '[' -z 67649 ']' 00:07:01.901 05:05:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.901 05:05:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.901 05:05:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.901 05:05:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.901 05:05:51 -- common/autotest_common.sh@10 -- # set +x 00:07:02.161 [2024-12-08 05:05:51.699334] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:02.161 [2024-12-08 05:05:51.700227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67649 ] 00:07:02.161 [2024-12-08 05:05:51.839973] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:02.161 [2024-12-08 05:05:51.840007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.161 [2024-12-08 05:05:51.874532] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:02.161 [2024-12-08 05:05:51.874702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.098 05:05:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.098 05:05:52 -- common/autotest_common.sh@862 -- # return 0 00:07:03.098 05:05:52 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=67665 00:07:03.098 05:05:52 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:03.098 05:05:52 -- event/cpu_locks.sh@103 -- # waitforlisten 67665 /var/tmp/spdk2.sock 00:07:03.098 05:05:52 -- common/autotest_common.sh@829 -- # '[' -z 67665 ']' 00:07:03.098 05:05:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.098 05:05:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.098 05:05:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.098 05:05:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.098 05:05:52 -- common/autotest_common.sh@10 -- # set +x 00:07:03.098 [2024-12-08 05:05:52.768943] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:03.098 [2024-12-08 05:05:52.769179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67665 ] 00:07:03.357 [2024-12-08 05:05:52.906405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.357 [2024-12-08 05:05:52.973764] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:03.357 [2024-12-08 05:05:52.973914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.294 05:05:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.294 05:05:53 -- common/autotest_common.sh@862 -- # return 0 00:07:04.294 05:05:53 -- event/cpu_locks.sh@105 -- # locks_exist 67665 00:07:04.294 05:05:53 -- event/cpu_locks.sh@22 -- # lslocks -p 67665 00:07:04.294 05:05:53 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:04.861 05:05:54 -- event/cpu_locks.sh@107 -- # killprocess 67649 00:07:04.862 05:05:54 -- common/autotest_common.sh@936 -- # '[' -z 67649 ']' 00:07:04.862 05:05:54 -- common/autotest_common.sh@940 -- # kill -0 67649 00:07:04.862 05:05:54 -- common/autotest_common.sh@941 -- # uname 00:07:04.862 05:05:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:04.862 05:05:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67649 00:07:04.862 killing process with pid 67649 00:07:04.862 05:05:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:04.862 05:05:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:04.862 05:05:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67649' 00:07:04.862 05:05:54 -- common/autotest_common.sh@955 -- # kill 67649 00:07:04.862 05:05:54 -- common/autotest_common.sh@960 -- # wait 67649 00:07:05.121 05:05:54 -- event/cpu_locks.sh@108 -- # killprocess 67665 00:07:05.121 05:05:54 -- common/autotest_common.sh@936 -- # '[' -z 67665 ']' 00:07:05.121 05:05:54 -- common/autotest_common.sh@940 -- # kill -0 67665 00:07:05.121 05:05:54 -- common/autotest_common.sh@941 -- # uname 00:07:05.121 05:05:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:05.121 05:05:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67665 00:07:05.121 killing process with pid 67665 00:07:05.121 05:05:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:05.121 05:05:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:05.121 05:05:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67665' 00:07:05.121 05:05:54 -- common/autotest_common.sh@955 -- # kill 67665 00:07:05.121 05:05:54 -- common/autotest_common.sh@960 -- # wait 67665 00:07:05.379 00:07:05.379 real 0m3.482s 00:07:05.379 user 0m4.207s 00:07:05.379 sys 0m0.791s 00:07:05.379 05:05:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.379 05:05:55 -- common/autotest_common.sh@10 -- # set +x 00:07:05.379 ************************************ 00:07:05.379 END TEST locking_app_on_unlocked_coremask 00:07:05.379 ************************************ 00:07:05.637 05:05:55 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:05.637 05:05:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:05.637 05:05:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.637 05:05:55 -- common/autotest_common.sh@10 -- # set +x 
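The locks_exist and killprocess checks repeated through this section boil down to asking lslocks whether a given spdk_tgt pid still holds the /var/tmp/spdk_cpu_lock_* files. A hedged stand-alone version of that check (the pid value is a placeholder, not taken from a live run):

    pid=67725            # illustrative: the spdk_tgt pid under test
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds spdk_cpu_lock file(s)"
    fi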
00:07:05.637 ************************************ 00:07:05.637 START TEST locking_app_on_locked_coremask 00:07:05.637 ************************************ 00:07:05.637 05:05:55 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:07:05.637 05:05:55 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=67725 00:07:05.637 05:05:55 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:05.637 05:05:55 -- event/cpu_locks.sh@116 -- # waitforlisten 67725 /var/tmp/spdk.sock 00:07:05.637 05:05:55 -- common/autotest_common.sh@829 -- # '[' -z 67725 ']' 00:07:05.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.637 05:05:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.637 05:05:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.637 05:05:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.637 05:05:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.637 05:05:55 -- common/autotest_common.sh@10 -- # set +x 00:07:05.637 [2024-12-08 05:05:55.230127] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:05.637 [2024-12-08 05:05:55.230213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67725 ] 00:07:05.637 [2024-12-08 05:05:55.368608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.637 [2024-12-08 05:05:55.403362] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:05.637 [2024-12-08 05:05:55.403512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.571 05:05:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.571 05:05:56 -- common/autotest_common.sh@862 -- # return 0 00:07:06.571 05:05:56 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:06.571 05:05:56 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=67741 00:07:06.571 05:05:56 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 67741 /var/tmp/spdk2.sock 00:07:06.571 05:05:56 -- common/autotest_common.sh@650 -- # local es=0 00:07:06.571 05:05:56 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67741 /var/tmp/spdk2.sock 00:07:06.571 05:05:56 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:06.571 05:05:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.571 05:05:56 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:06.571 05:05:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.571 05:05:56 -- common/autotest_common.sh@653 -- # waitforlisten 67741 /var/tmp/spdk2.sock 00:07:06.571 05:05:56 -- common/autotest_common.sh@829 -- # '[' -z 67741 ']' 00:07:06.571 05:05:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.571 05:05:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.571 05:05:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:06.571 05:05:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.571 05:05:56 -- common/autotest_common.sh@10 -- # set +x 00:07:06.571 [2024-12-08 05:05:56.267486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:06.571 [2024-12-08 05:05:56.267563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67741 ] 00:07:06.830 [2024-12-08 05:05:56.403279] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 67725 has claimed it. 00:07:06.830 [2024-12-08 05:05:56.403342] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:07.398 ERROR: process (pid: 67741) is no longer running 00:07:07.398 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67741) - No such process 00:07:07.398 05:05:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.398 05:05:56 -- common/autotest_common.sh@862 -- # return 1 00:07:07.398 05:05:56 -- common/autotest_common.sh@653 -- # es=1 00:07:07.398 05:05:56 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.398 05:05:56 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:07.398 05:05:56 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.398 05:05:56 -- event/cpu_locks.sh@122 -- # locks_exist 67725 00:07:07.398 05:05:56 -- event/cpu_locks.sh@22 -- # lslocks -p 67725 00:07:07.398 05:05:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.658 05:05:57 -- event/cpu_locks.sh@124 -- # killprocess 67725 00:07:07.658 05:05:57 -- common/autotest_common.sh@936 -- # '[' -z 67725 ']' 00:07:07.658 05:05:57 -- common/autotest_common.sh@940 -- # kill -0 67725 00:07:07.658 05:05:57 -- common/autotest_common.sh@941 -- # uname 00:07:07.658 05:05:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:07.658 05:05:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67725 00:07:07.658 05:05:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:07.658 killing process with pid 67725 00:07:07.658 05:05:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:07.658 05:05:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67725' 00:07:07.658 05:05:57 -- common/autotest_common.sh@955 -- # kill 67725 00:07:07.658 05:05:57 -- common/autotest_common.sh@960 -- # wait 67725 00:07:07.919 00:07:07.919 real 0m2.453s 00:07:07.919 user 0m2.995s 00:07:07.919 sys 0m0.463s 00:07:07.919 05:05:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:07.919 05:05:57 -- common/autotest_common.sh@10 -- # set +x 00:07:07.919 ************************************ 00:07:07.919 END TEST locking_app_on_locked_coremask 00:07:07.919 ************************************ 00:07:07.919 05:05:57 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:07.919 05:05:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:07.919 05:05:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.919 05:05:57 -- common/autotest_common.sh@10 -- # set +x 00:07:07.919 ************************************ 00:07:07.919 START TEST locking_overlapped_coremask 00:07:07.919 ************************************ 00:07:07.919 05:05:57 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:07:07.919 05:05:57 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=67788 00:07:07.919 05:05:57 -- event/cpu_locks.sh@133 -- # waitforlisten 67788 /var/tmp/spdk.sock 00:07:07.919 05:05:57 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:07.919 05:05:57 -- common/autotest_common.sh@829 -- # '[' -z 67788 ']' 00:07:07.919 05:05:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.919 05:05:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.919 05:05:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.919 05:05:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.919 05:05:57 -- common/autotest_common.sh@10 -- # set +x 00:07:08.178 [2024-12-08 05:05:57.733520] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:08.178 [2024-12-08 05:05:57.733601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67788 ] 00:07:08.178 [2024-12-08 05:05:57.860875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.178 [2024-12-08 05:05:57.895154] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:08.178 [2024-12-08 05:05:57.895505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.178 [2024-12-08 05:05:57.895656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.179 [2024-12-08 05:05:57.895659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.117 05:05:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.117 05:05:58 -- common/autotest_common.sh@862 -- # return 0 00:07:09.117 05:05:58 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=67806 00:07:09.117 05:05:58 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 67806 /var/tmp/spdk2.sock 00:07:09.117 05:05:58 -- common/autotest_common.sh@650 -- # local es=0 00:07:09.117 05:05:58 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67806 /var/tmp/spdk2.sock 00:07:09.117 05:05:58 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:09.117 05:05:58 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:09.117 05:05:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.117 05:05:58 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:09.117 05:05:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.117 05:05:58 -- common/autotest_common.sh@653 -- # waitforlisten 67806 /var/tmp/spdk2.sock 00:07:09.117 05:05:58 -- common/autotest_common.sh@829 -- # '[' -z 67806 ']' 00:07:09.117 05:05:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.117 05:05:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.117 05:05:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
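The first target above runs with -m 0x7 (cores 0 to 2) and the second with -m 0x1c (cores 2 to 4), so the masks intersect at core 2, which is exactly where the overlapped-coremask test expects the second launch to fail. A quick, purely illustrative way to see the overlap from a shell:

    printf 'overlapping cores mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2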
00:07:09.117 05:05:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.117 05:05:58 -- common/autotest_common.sh@10 -- # set +x 00:07:09.117 [2024-12-08 05:05:58.764426] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:09.117 [2024-12-08 05:05:58.765325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67806 ] 00:07:09.375 [2024-12-08 05:05:58.910002] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67788 has claimed it. 00:07:09.375 [2024-12-08 05:05:58.910085] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:09.944 ERROR: process (pid: 67806) is no longer running 00:07:09.944 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67806) - No such process 00:07:09.944 05:05:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.944 05:05:59 -- common/autotest_common.sh@862 -- # return 1 00:07:09.944 05:05:59 -- common/autotest_common.sh@653 -- # es=1 00:07:09.944 05:05:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:09.944 05:05:59 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:09.944 05:05:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:09.944 05:05:59 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:09.944 05:05:59 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:09.944 05:05:59 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:09.944 05:05:59 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:09.944 05:05:59 -- event/cpu_locks.sh@141 -- # killprocess 67788 00:07:09.944 05:05:59 -- common/autotest_common.sh@936 -- # '[' -z 67788 ']' 00:07:09.944 05:05:59 -- common/autotest_common.sh@940 -- # kill -0 67788 00:07:09.944 05:05:59 -- common/autotest_common.sh@941 -- # uname 00:07:09.944 05:05:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:09.944 05:05:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67788 00:07:09.944 05:05:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:09.944 05:05:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:09.944 05:05:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67788' 00:07:09.944 killing process with pid 67788 00:07:09.944 05:05:59 -- common/autotest_common.sh@955 -- # kill 67788 00:07:09.944 05:05:59 -- common/autotest_common.sh@960 -- # wait 67788 00:07:10.204 00:07:10.204 real 0m2.072s 00:07:10.204 user 0m6.093s 00:07:10.204 sys 0m0.321s 00:07:10.204 05:05:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.204 05:05:59 -- common/autotest_common.sh@10 -- # set +x 00:07:10.204 ************************************ 00:07:10.204 END TEST locking_overlapped_coremask 00:07:10.204 ************************************ 00:07:10.204 05:05:59 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:10.204 05:05:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:10.204 05:05:59 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.204 05:05:59 -- common/autotest_common.sh@10 -- # set +x 00:07:10.204 ************************************ 00:07:10.204 START TEST locking_overlapped_coremask_via_rpc 00:07:10.204 ************************************ 00:07:10.204 05:05:59 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:07:10.204 05:05:59 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=67846 00:07:10.204 05:05:59 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:10.204 05:05:59 -- event/cpu_locks.sh@149 -- # waitforlisten 67846 /var/tmp/spdk.sock 00:07:10.204 05:05:59 -- common/autotest_common.sh@829 -- # '[' -z 67846 ']' 00:07:10.204 05:05:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.204 05:05:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.204 05:05:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.204 05:05:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.204 05:05:59 -- common/autotest_common.sh@10 -- # set +x 00:07:10.204 [2024-12-08 05:05:59.857854] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:10.204 [2024-12-08 05:05:59.857961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67846 ] 00:07:10.464 [2024-12-08 05:05:59.993328] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:10.464 [2024-12-08 05:05:59.993394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.464 [2024-12-08 05:06:00.030122] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:10.464 [2024-12-08 05:06:00.030440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.464 [2024-12-08 05:06:00.030546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.464 [2024-12-08 05:06:00.030551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.403 05:06:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.403 05:06:00 -- common/autotest_common.sh@862 -- # return 0 00:07:11.403 05:06:00 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=67864 00:07:11.403 05:06:00 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:11.403 05:06:00 -- event/cpu_locks.sh@153 -- # waitforlisten 67864 /var/tmp/spdk2.sock 00:07:11.403 05:06:00 -- common/autotest_common.sh@829 -- # '[' -z 67864 ']' 00:07:11.403 05:06:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.403 05:06:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.403 05:06:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:11.403 05:06:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.403 05:06:00 -- common/autotest_common.sh@10 -- # set +x 00:07:11.403 [2024-12-08 05:06:00.941135] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:11.403 [2024-12-08 05:06:00.941239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67864 ] 00:07:11.403 [2024-12-08 05:06:01.087950] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:11.403 [2024-12-08 05:06:01.088004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.403 [2024-12-08 05:06:01.161423] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:11.403 [2024-12-08 05:06:01.162036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.403 [2024-12-08 05:06:01.162084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:11.403 [2024-12-08 05:06:01.162086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.340 05:06:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.340 05:06:01 -- common/autotest_common.sh@862 -- # return 0 00:07:12.340 05:06:01 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:12.340 05:06:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.340 05:06:01 -- common/autotest_common.sh@10 -- # set +x 00:07:12.340 05:06:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.340 05:06:01 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:12.340 05:06:01 -- common/autotest_common.sh@650 -- # local es=0 00:07:12.340 05:06:01 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:12.340 05:06:01 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:12.340 05:06:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.340 05:06:01 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:12.340 05:06:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.340 05:06:01 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:12.340 05:06:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.340 05:06:01 -- common/autotest_common.sh@10 -- # set +x 00:07:12.340 [2024-12-08 05:06:01.929880] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67846 has claimed it. 
00:07:12.340 request: 00:07:12.340 { 00:07:12.340 "method": "framework_enable_cpumask_locks", 00:07:12.340 "req_id": 1 00:07:12.340 } 00:07:12.340 Got JSON-RPC error response 00:07:12.340 response: 00:07:12.340 { 00:07:12.340 "code": -32603, 00:07:12.340 "message": "Failed to claim CPU core: 2" 00:07:12.340 } 00:07:12.340 05:06:01 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:12.340 05:06:01 -- common/autotest_common.sh@653 -- # es=1 00:07:12.340 05:06:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:12.340 05:06:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:12.340 05:06:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:12.340 05:06:01 -- event/cpu_locks.sh@158 -- # waitforlisten 67846 /var/tmp/spdk.sock 00:07:12.340 05:06:01 -- common/autotest_common.sh@829 -- # '[' -z 67846 ']' 00:07:12.340 05:06:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.340 05:06:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.341 05:06:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.341 05:06:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.341 05:06:01 -- common/autotest_common.sh@10 -- # set +x 00:07:12.600 05:06:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.600 05:06:02 -- common/autotest_common.sh@862 -- # return 0 00:07:12.600 05:06:02 -- event/cpu_locks.sh@159 -- # waitforlisten 67864 /var/tmp/spdk2.sock 00:07:12.600 05:06:02 -- common/autotest_common.sh@829 -- # '[' -z 67864 ']' 00:07:12.600 05:06:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:12.600 05:06:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.600 05:06:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:12.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
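The JSON-RPC exchange shown above is produced by the harness's rpc_cmd wrapper around SPDK's JSON-RPC client. Assuming the stock scripts/rpc.py client, the equivalent manual call against the second target's socket would look like the sketch below; the -32603 "Failed to claim CPU core: 2" response is what comes back when another process already holds one of the requested cores:

    # expected to fail: core 2 is already locked by the first target
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks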
00:07:12.600 05:06:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.600 05:06:02 -- common/autotest_common.sh@10 -- # set +x 00:07:12.860 05:06:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.860 05:06:02 -- common/autotest_common.sh@862 -- # return 0 00:07:12.860 05:06:02 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:12.860 05:06:02 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:12.860 05:06:02 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:12.860 05:06:02 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:12.860 00:07:12.860 real 0m2.674s 00:07:12.860 user 0m1.433s 00:07:12.860 sys 0m0.182s 00:07:12.860 05:06:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.860 05:06:02 -- common/autotest_common.sh@10 -- # set +x 00:07:12.860 ************************************ 00:07:12.860 END TEST locking_overlapped_coremask_via_rpc 00:07:12.860 ************************************ 00:07:12.860 05:06:02 -- event/cpu_locks.sh@174 -- # cleanup 00:07:12.860 05:06:02 -- event/cpu_locks.sh@15 -- # [[ -z 67846 ]] 00:07:12.860 05:06:02 -- event/cpu_locks.sh@15 -- # killprocess 67846 00:07:12.860 05:06:02 -- common/autotest_common.sh@936 -- # '[' -z 67846 ']' 00:07:12.860 05:06:02 -- common/autotest_common.sh@940 -- # kill -0 67846 00:07:12.860 05:06:02 -- common/autotest_common.sh@941 -- # uname 00:07:12.860 05:06:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:12.860 05:06:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67846 00:07:12.860 killing process with pid 67846 00:07:12.860 05:06:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:12.860 05:06:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:12.860 05:06:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67846' 00:07:12.860 05:06:02 -- common/autotest_common.sh@955 -- # kill 67846 00:07:12.860 05:06:02 -- common/autotest_common.sh@960 -- # wait 67846 00:07:13.119 05:06:02 -- event/cpu_locks.sh@16 -- # [[ -z 67864 ]] 00:07:13.119 05:06:02 -- event/cpu_locks.sh@16 -- # killprocess 67864 00:07:13.119 05:06:02 -- common/autotest_common.sh@936 -- # '[' -z 67864 ']' 00:07:13.119 05:06:02 -- common/autotest_common.sh@940 -- # kill -0 67864 00:07:13.119 05:06:02 -- common/autotest_common.sh@941 -- # uname 00:07:13.119 05:06:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:13.119 05:06:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67864 00:07:13.119 killing process with pid 67864 00:07:13.119 05:06:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:07:13.120 05:06:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:07:13.120 05:06:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67864' 00:07:13.120 05:06:02 -- common/autotest_common.sh@955 -- # kill 67864 00:07:13.120 05:06:02 -- common/autotest_common.sh@960 -- # wait 67864 00:07:13.379 05:06:03 -- event/cpu_locks.sh@18 -- # rm -f 00:07:13.379 05:06:03 -- event/cpu_locks.sh@1 -- # cleanup 00:07:13.379 05:06:03 -- event/cpu_locks.sh@15 -- # [[ -z 67846 ]] 00:07:13.379 05:06:03 -- event/cpu_locks.sh@15 -- # killprocess 67846 00:07:13.379 05:06:03 -- 
common/autotest_common.sh@936 -- # '[' -z 67846 ']' 00:07:13.379 05:06:03 -- common/autotest_common.sh@940 -- # kill -0 67846 00:07:13.379 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67846) - No such process 00:07:13.379 Process with pid 67846 is not found 00:07:13.379 05:06:03 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67846 is not found' 00:07:13.379 05:06:03 -- event/cpu_locks.sh@16 -- # [[ -z 67864 ]] 00:07:13.379 05:06:03 -- event/cpu_locks.sh@16 -- # killprocess 67864 00:07:13.379 05:06:03 -- common/autotest_common.sh@936 -- # '[' -z 67864 ']' 00:07:13.379 05:06:03 -- common/autotest_common.sh@940 -- # kill -0 67864 00:07:13.379 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67864) - No such process 00:07:13.379 Process with pid 67864 is not found 00:07:13.379 05:06:03 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67864 is not found' 00:07:13.379 05:06:03 -- event/cpu_locks.sh@18 -- # rm -f 00:07:13.379 00:07:13.379 real 0m18.545s 00:07:13.379 user 0m34.785s 00:07:13.379 sys 0m4.064s 00:07:13.379 05:06:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.379 05:06:03 -- common/autotest_common.sh@10 -- # set +x 00:07:13.379 ************************************ 00:07:13.379 END TEST cpu_locks 00:07:13.379 ************************************ 00:07:13.379 00:07:13.379 real 0m45.632s 00:07:13.379 user 1m31.955s 00:07:13.379 sys 0m7.221s 00:07:13.379 05:06:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.379 05:06:03 -- common/autotest_common.sh@10 -- # set +x 00:07:13.379 ************************************ 00:07:13.379 END TEST event 00:07:13.379 ************************************ 00:07:13.379 05:06:03 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:13.379 05:06:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:13.379 05:06:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.379 05:06:03 -- common/autotest_common.sh@10 -- # set +x 00:07:13.638 ************************************ 00:07:13.638 START TEST thread 00:07:13.638 ************************************ 00:07:13.638 05:06:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:13.638 * Looking for test storage... 
00:07:13.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:13.638 05:06:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:13.638 05:06:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:13.638 05:06:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:13.638 05:06:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:13.638 05:06:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:13.638 05:06:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:13.638 05:06:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:13.638 05:06:03 -- scripts/common.sh@335 -- # IFS=.-: 00:07:13.638 05:06:03 -- scripts/common.sh@335 -- # read -ra ver1 00:07:13.638 05:06:03 -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.638 05:06:03 -- scripts/common.sh@336 -- # read -ra ver2 00:07:13.638 05:06:03 -- scripts/common.sh@337 -- # local 'op=<' 00:07:13.638 05:06:03 -- scripts/common.sh@339 -- # ver1_l=2 00:07:13.638 05:06:03 -- scripts/common.sh@340 -- # ver2_l=1 00:07:13.638 05:06:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:13.638 05:06:03 -- scripts/common.sh@343 -- # case "$op" in 00:07:13.638 05:06:03 -- scripts/common.sh@344 -- # : 1 00:07:13.638 05:06:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:13.638 05:06:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:13.638 05:06:03 -- scripts/common.sh@364 -- # decimal 1 00:07:13.638 05:06:03 -- scripts/common.sh@352 -- # local d=1 00:07:13.638 05:06:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.638 05:06:03 -- scripts/common.sh@354 -- # echo 1 00:07:13.638 05:06:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:13.638 05:06:03 -- scripts/common.sh@365 -- # decimal 2 00:07:13.638 05:06:03 -- scripts/common.sh@352 -- # local d=2 00:07:13.638 05:06:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.638 05:06:03 -- scripts/common.sh@354 -- # echo 2 00:07:13.638 05:06:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:13.638 05:06:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:13.638 05:06:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:13.638 05:06:03 -- scripts/common.sh@367 -- # return 0 00:07:13.638 05:06:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.638 05:06:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:13.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.638 --rc genhtml_branch_coverage=1 00:07:13.638 --rc genhtml_function_coverage=1 00:07:13.638 --rc genhtml_legend=1 00:07:13.638 --rc geninfo_all_blocks=1 00:07:13.638 --rc geninfo_unexecuted_blocks=1 00:07:13.638 00:07:13.638 ' 00:07:13.638 05:06:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:13.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.638 --rc genhtml_branch_coverage=1 00:07:13.638 --rc genhtml_function_coverage=1 00:07:13.638 --rc genhtml_legend=1 00:07:13.638 --rc geninfo_all_blocks=1 00:07:13.638 --rc geninfo_unexecuted_blocks=1 00:07:13.638 00:07:13.638 ' 00:07:13.638 05:06:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:13.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.638 --rc genhtml_branch_coverage=1 00:07:13.638 --rc genhtml_function_coverage=1 00:07:13.638 --rc genhtml_legend=1 00:07:13.638 --rc geninfo_all_blocks=1 00:07:13.638 --rc geninfo_unexecuted_blocks=1 00:07:13.638 00:07:13.638 ' 00:07:13.638 05:06:03 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:13.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.638 --rc genhtml_branch_coverage=1 00:07:13.638 --rc genhtml_function_coverage=1 00:07:13.638 --rc genhtml_legend=1 00:07:13.638 --rc geninfo_all_blocks=1 00:07:13.638 --rc geninfo_unexecuted_blocks=1 00:07:13.638 00:07:13.638 ' 00:07:13.638 05:06:03 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:13.638 05:06:03 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:13.638 05:06:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.638 05:06:03 -- common/autotest_common.sh@10 -- # set +x 00:07:13.638 ************************************ 00:07:13.638 START TEST thread_poller_perf 00:07:13.638 ************************************ 00:07:13.638 05:06:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:13.638 [2024-12-08 05:06:03.361622] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:13.638 [2024-12-08 05:06:03.361763] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67990 ] 00:07:13.906 [2024-12-08 05:06:03.502277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.906 [2024-12-08 05:06:03.544482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.906 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:14.950 [2024-12-08T05:06:04.736Z] ====================================== 00:07:14.950 [2024-12-08T05:06:04.737Z] busy:2209820984 (cyc) 00:07:14.951 [2024-12-08T05:06:04.737Z] total_run_count: 321000 00:07:14.951 [2024-12-08T05:06:04.737Z] tsc_hz: 2200000000 (cyc) 00:07:14.951 [2024-12-08T05:06:04.737Z] ====================================== 00:07:14.951 [2024-12-08T05:06:04.737Z] poller_cost: 6884 (cyc), 3129 (nsec) 00:07:14.951 00:07:14.951 real 0m1.265s 00:07:14.951 user 0m1.110s 00:07:14.951 sys 0m0.046s 00:07:14.951 05:06:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.951 ************************************ 00:07:14.951 END TEST thread_poller_perf 00:07:14.951 ************************************ 00:07:14.951 05:06:04 -- common/autotest_common.sh@10 -- # set +x 00:07:14.951 05:06:04 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:14.951 05:06:04 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:14.951 05:06:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.951 05:06:04 -- common/autotest_common.sh@10 -- # set +x 00:07:14.951 ************************************ 00:07:14.951 START TEST thread_poller_perf 00:07:14.951 ************************************ 00:07:14.951 05:06:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:14.951 [2024-12-08 05:06:04.681178] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
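The poller_perf summary above reports poller_cost as roughly the busy cycle count divided by total_run_count, converted to nanoseconds using the reported TSC rate; for the 1-microsecond-period run, 2209820984 cyc over 321000 runs is about 6884 cyc per poller, or about 3129 nsec at 2.2 cycles per nanosecond. A one-liner reproducing that arithmetic with the numbers from this log (the tool's exact rounding may differ slightly):

    awk 'BEGIN { busy = 2209820984; runs = 321000; hz = 2200000000;
                 cyc = int(busy / runs);
                 printf "%d cyc, %d nsec\n", cyc, int(cyc / (hz / 1e9)) }'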
00:07:14.951 [2024-12-08 05:06:04.681270] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68032 ] 00:07:15.208 [2024-12-08 05:06:04.818265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.208 [2024-12-08 05:06:04.859300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.208 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:16.139 [2024-12-08T05:06:05.925Z] ====================================== 00:07:16.139 [2024-12-08T05:06:05.925Z] busy:2202442214 (cyc) 00:07:16.139 [2024-12-08T05:06:05.925Z] total_run_count: 4551000 00:07:16.139 [2024-12-08T05:06:05.925Z] tsc_hz: 2200000000 (cyc) 00:07:16.139 [2024-12-08T05:06:05.925Z] ====================================== 00:07:16.139 [2024-12-08T05:06:05.925Z] poller_cost: 483 (cyc), 219 (nsec) 00:07:16.139 00:07:16.139 real 0m1.248s 00:07:16.139 user 0m1.091s 00:07:16.139 sys 0m0.049s 00:07:16.139 05:06:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.139 ************************************ 00:07:16.139 END TEST thread_poller_perf 00:07:16.139 05:06:05 -- common/autotest_common.sh@10 -- # set +x 00:07:16.139 ************************************ 00:07:16.397 05:06:05 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:16.397 00:07:16.397 real 0m2.786s 00:07:16.397 user 0m2.334s 00:07:16.397 sys 0m0.236s 00:07:16.397 05:06:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.397 ************************************ 00:07:16.397 END TEST thread 00:07:16.397 ************************************ 00:07:16.397 05:06:05 -- common/autotest_common.sh@10 -- # set +x 00:07:16.397 05:06:05 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:16.397 05:06:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:16.397 05:06:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.397 05:06:05 -- common/autotest_common.sh@10 -- # set +x 00:07:16.397 ************************************ 00:07:16.397 START TEST accel 00:07:16.397 ************************************ 00:07:16.397 05:06:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:16.397 * Looking for test storage... 
00:07:16.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:16.397 05:06:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:16.397 05:06:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:16.397 05:06:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:16.397 05:06:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:16.397 05:06:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:16.397 05:06:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:16.397 05:06:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:16.397 05:06:06 -- scripts/common.sh@335 -- # IFS=.-: 00:07:16.397 05:06:06 -- scripts/common.sh@335 -- # read -ra ver1 00:07:16.397 05:06:06 -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.397 05:06:06 -- scripts/common.sh@336 -- # read -ra ver2 00:07:16.397 05:06:06 -- scripts/common.sh@337 -- # local 'op=<' 00:07:16.397 05:06:06 -- scripts/common.sh@339 -- # ver1_l=2 00:07:16.397 05:06:06 -- scripts/common.sh@340 -- # ver2_l=1 00:07:16.397 05:06:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:16.397 05:06:06 -- scripts/common.sh@343 -- # case "$op" in 00:07:16.397 05:06:06 -- scripts/common.sh@344 -- # : 1 00:07:16.397 05:06:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:16.397 05:06:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:16.397 05:06:06 -- scripts/common.sh@364 -- # decimal 1 00:07:16.397 05:06:06 -- scripts/common.sh@352 -- # local d=1 00:07:16.397 05:06:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.397 05:06:06 -- scripts/common.sh@354 -- # echo 1 00:07:16.397 05:06:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:16.654 05:06:06 -- scripts/common.sh@365 -- # decimal 2 00:07:16.654 05:06:06 -- scripts/common.sh@352 -- # local d=2 00:07:16.654 05:06:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.654 05:06:06 -- scripts/common.sh@354 -- # echo 2 00:07:16.654 05:06:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:16.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:16.654 05:06:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:16.654 05:06:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:16.654 05:06:06 -- scripts/common.sh@367 -- # return 0 00:07:16.654 05:06:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.654 05:06:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:16.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.654 --rc genhtml_branch_coverage=1 00:07:16.654 --rc genhtml_function_coverage=1 00:07:16.654 --rc genhtml_legend=1 00:07:16.654 --rc geninfo_all_blocks=1 00:07:16.654 --rc geninfo_unexecuted_blocks=1 00:07:16.654 00:07:16.654 ' 00:07:16.654 05:06:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:16.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.654 --rc genhtml_branch_coverage=1 00:07:16.654 --rc genhtml_function_coverage=1 00:07:16.655 --rc genhtml_legend=1 00:07:16.655 --rc geninfo_all_blocks=1 00:07:16.655 --rc geninfo_unexecuted_blocks=1 00:07:16.655 00:07:16.655 ' 00:07:16.655 05:06:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:16.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.655 --rc genhtml_branch_coverage=1 00:07:16.655 --rc genhtml_function_coverage=1 00:07:16.655 --rc genhtml_legend=1 00:07:16.655 --rc geninfo_all_blocks=1 00:07:16.655 --rc geninfo_unexecuted_blocks=1 00:07:16.655 00:07:16.655 ' 00:07:16.655 05:06:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:16.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.655 --rc genhtml_branch_coverage=1 00:07:16.655 --rc genhtml_function_coverage=1 00:07:16.655 --rc genhtml_legend=1 00:07:16.655 --rc geninfo_all_blocks=1 00:07:16.655 --rc geninfo_unexecuted_blocks=1 00:07:16.655 00:07:16.655 ' 00:07:16.655 05:06:06 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:07:16.655 05:06:06 -- accel/accel.sh@74 -- # get_expected_opcs 00:07:16.655 05:06:06 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:16.655 05:06:06 -- accel/accel.sh@59 -- # spdk_tgt_pid=68108 00:07:16.655 05:06:06 -- accel/accel.sh@60 -- # waitforlisten 68108 00:07:16.655 05:06:06 -- common/autotest_common.sh@829 -- # '[' -z 68108 ']' 00:07:16.655 05:06:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.655 05:06:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.655 05:06:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.655 05:06:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.655 05:06:06 -- common/autotest_common.sh@10 -- # set +x 00:07:16.655 05:06:06 -- accel/accel.sh@58 -- # build_accel_config 00:07:16.655 05:06:06 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:16.655 05:06:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.655 05:06:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.655 05:06:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.655 05:06:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.655 05:06:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.655 05:06:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.655 05:06:06 -- accel/accel.sh@42 -- # jq -r . 00:07:16.655 [2024-12-08 05:06:06.238100] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:16.655 [2024-12-08 05:06:06.238216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68108 ] 00:07:16.655 [2024-12-08 05:06:06.374982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.655 [2024-12-08 05:06:06.412060] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:16.655 [2024-12-08 05:06:06.412276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.591 05:06:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.591 05:06:07 -- common/autotest_common.sh@862 -- # return 0 00:07:17.591 05:06:07 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:17.591 05:06:07 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:07:17.591 05:06:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.591 05:06:07 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:17.591 05:06:07 -- common/autotest_common.sh@10 -- # set +x 00:07:17.591 05:06:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.591 05:06:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # IFS== 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:17.591 05:06:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:17.591 05:06:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # IFS== 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:17.591 05:06:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:17.591 05:06:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # IFS== 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:17.591 05:06:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:17.591 05:06:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # IFS== 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:17.591 05:06:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:17.591 05:06:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # IFS== 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:17.591 05:06:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:17.591 05:06:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # IFS== 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:17.591 05:06:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:17.591 05:06:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # IFS== 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:17.591 05:06:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:17.591 05:06:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # IFS== 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:17.591 05:06:07 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:07:17.591 05:06:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # IFS== 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:17.591 05:06:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:17.591 05:06:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # IFS== 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:17.591 05:06:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:17.591 05:06:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # IFS== 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:17.591 05:06:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:17.591 05:06:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # IFS== 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:17.591 05:06:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:17.591 05:06:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # IFS== 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:17.591 05:06:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:17.591 05:06:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # IFS== 00:07:17.591 05:06:07 -- accel/accel.sh@64 -- # read -r opc module 00:07:17.591 05:06:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:17.591 05:06:07 -- accel/accel.sh@67 -- # killprocess 68108 00:07:17.591 05:06:07 -- common/autotest_common.sh@936 -- # '[' -z 68108 ']' 00:07:17.591 05:06:07 -- common/autotest_common.sh@940 -- # kill -0 68108 00:07:17.591 05:06:07 -- common/autotest_common.sh@941 -- # uname 00:07:17.591 05:06:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:17.591 05:06:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68108 00:07:17.591 killing process with pid 68108 00:07:17.591 05:06:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:17.591 05:06:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:17.591 05:06:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68108' 00:07:17.591 05:06:07 -- common/autotest_common.sh@955 -- # kill 68108 00:07:17.591 05:06:07 -- common/autotest_common.sh@960 -- # wait 68108 00:07:17.851 05:06:07 -- accel/accel.sh@68 -- # trap - ERR 00:07:17.851 05:06:07 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:07:17.851 05:06:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:17.851 05:06:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.851 05:06:07 -- common/autotest_common.sh@10 -- # set +x 00:07:17.851 05:06:07 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:07:17.851 05:06:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:17.851 05:06:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.851 05:06:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.851 05:06:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.851 05:06:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.851 05:06:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.851 05:06:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:07:17.851 05:06:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.851 05:06:07 -- accel/accel.sh@42 -- # jq -r . 00:07:17.851 05:06:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.851 05:06:07 -- common/autotest_common.sh@10 -- # set +x 00:07:18.110 05:06:07 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:18.110 05:06:07 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:18.110 05:06:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.110 05:06:07 -- common/autotest_common.sh@10 -- # set +x 00:07:18.110 ************************************ 00:07:18.110 START TEST accel_missing_filename 00:07:18.110 ************************************ 00:07:18.110 05:06:07 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:07:18.110 05:06:07 -- common/autotest_common.sh@650 -- # local es=0 00:07:18.111 05:06:07 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:18.111 05:06:07 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:18.111 05:06:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.111 05:06:07 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:18.111 05:06:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.111 05:06:07 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:07:18.111 05:06:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:18.111 05:06:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.111 05:06:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.111 05:06:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.111 05:06:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.111 05:06:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.111 05:06:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.111 05:06:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.111 05:06:07 -- accel/accel.sh@42 -- # jq -r . 00:07:18.111 [2024-12-08 05:06:07.694484] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:18.111 [2024-12-08 05:06:07.694573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68160 ] 00:07:18.111 [2024-12-08 05:06:07.831875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.111 [2024-12-08 05:06:07.874657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.370 [2024-12-08 05:06:07.910927] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.370 [2024-12-08 05:06:07.954365] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:18.370 A filename is required. 
00:07:18.370 05:06:08 -- common/autotest_common.sh@653 -- # es=234 00:07:18.370 05:06:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:18.370 05:06:08 -- common/autotest_common.sh@662 -- # es=106 00:07:18.370 05:06:08 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:18.370 05:06:08 -- common/autotest_common.sh@670 -- # es=1 00:07:18.370 05:06:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:18.370 00:07:18.370 real 0m0.338s 00:07:18.370 user 0m0.203s 00:07:18.370 sys 0m0.081s 00:07:18.370 05:06:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.370 05:06:08 -- common/autotest_common.sh@10 -- # set +x 00:07:18.370 ************************************ 00:07:18.370 END TEST accel_missing_filename 00:07:18.370 ************************************ 00:07:18.370 05:06:08 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:18.370 05:06:08 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:18.370 05:06:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.370 05:06:08 -- common/autotest_common.sh@10 -- # set +x 00:07:18.370 ************************************ 00:07:18.370 START TEST accel_compress_verify 00:07:18.370 ************************************ 00:07:18.370 05:06:08 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:18.370 05:06:08 -- common/autotest_common.sh@650 -- # local es=0 00:07:18.370 05:06:08 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:18.370 05:06:08 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:18.370 05:06:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.370 05:06:08 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:18.370 05:06:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.370 05:06:08 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:18.370 05:06:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:18.370 05:06:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.370 05:06:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.370 05:06:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.370 05:06:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.370 05:06:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.370 05:06:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.370 05:06:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.370 05:06:08 -- accel/accel.sh@42 -- # jq -r . 00:07:18.370 [2024-12-08 05:06:08.079843] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:18.370 [2024-12-08 05:06:08.079939] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68184 ] 00:07:18.629 [2024-12-08 05:06:08.216640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.629 [2024-12-08 05:06:08.261162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.629 [2024-12-08 05:06:08.298697] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.629 [2024-12-08 05:06:08.344244] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:18.629 00:07:18.629 Compression does not support the verify option, aborting. 00:07:18.629 05:06:08 -- common/autotest_common.sh@653 -- # es=161 00:07:18.629 05:06:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:18.629 05:06:08 -- common/autotest_common.sh@662 -- # es=33 00:07:18.629 05:06:08 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:18.629 05:06:08 -- common/autotest_common.sh@670 -- # es=1 00:07:18.629 05:06:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:18.629 00:07:18.629 real 0m0.344s 00:07:18.629 user 0m0.194s 00:07:18.629 sys 0m0.092s 00:07:18.629 05:06:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.629 ************************************ 00:07:18.629 05:06:08 -- common/autotest_common.sh@10 -- # set +x 00:07:18.629 END TEST accel_compress_verify 00:07:18.629 ************************************ 00:07:18.889 05:06:08 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:18.889 05:06:08 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:18.889 05:06:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.889 05:06:08 -- common/autotest_common.sh@10 -- # set +x 00:07:18.889 ************************************ 00:07:18.889 START TEST accel_wrong_workload 00:07:18.889 ************************************ 00:07:18.889 05:06:08 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:07:18.889 05:06:08 -- common/autotest_common.sh@650 -- # local es=0 00:07:18.889 05:06:08 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:18.889 05:06:08 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:18.889 05:06:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.889 05:06:08 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:18.889 05:06:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.889 05:06:08 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:07:18.889 05:06:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:18.889 05:06:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.889 05:06:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.889 05:06:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.889 05:06:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.889 05:06:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.889 05:06:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.889 05:06:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.889 05:06:08 -- accel/accel.sh@42 -- # jq -r . 
00:07:18.889 Unsupported workload type: foobar 00:07:18.889 [2024-12-08 05:06:08.464693] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:18.889 accel_perf options: 00:07:18.889 [-h help message] 00:07:18.889 [-q queue depth per core] 00:07:18.889 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:18.889 [-T number of threads per core 00:07:18.889 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:18.889 [-t time in seconds] 00:07:18.889 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:18.889 [ dif_verify, , dif_generate, dif_generate_copy 00:07:18.889 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:18.889 [-l for compress/decompress workloads, name of uncompressed input file 00:07:18.889 [-S for crc32c workload, use this seed value (default 0) 00:07:18.889 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:18.889 [-f for fill workload, use this BYTE value (default 255) 00:07:18.889 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:18.889 [-y verify result if this switch is on] 00:07:18.889 [-a tasks to allocate per core (default: same value as -q)] 00:07:18.889 Can be used to spread operations across a wider range of memory. 00:07:18.889 05:06:08 -- common/autotest_common.sh@653 -- # es=1 00:07:18.889 05:06:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:18.889 05:06:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:18.889 05:06:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:18.889 00:07:18.889 real 0m0.023s 00:07:18.889 user 0m0.014s 00:07:18.889 sys 0m0.010s 00:07:18.889 05:06:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.889 05:06:08 -- common/autotest_common.sh@10 -- # set +x 00:07:18.889 ************************************ 00:07:18.889 END TEST accel_wrong_workload 00:07:18.889 ************************************ 00:07:18.889 05:06:08 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:18.889 05:06:08 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:18.889 05:06:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.889 05:06:08 -- common/autotest_common.sh@10 -- # set +x 00:07:18.889 ************************************ 00:07:18.889 START TEST accel_negative_buffers 00:07:18.889 ************************************ 00:07:18.889 05:06:08 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:18.889 05:06:08 -- common/autotest_common.sh@650 -- # local es=0 00:07:18.889 05:06:08 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:18.889 05:06:08 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:18.889 05:06:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.889 05:06:08 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:18.890 05:06:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.890 05:06:08 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:07:18.890 05:06:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:18.890 05:06:08 -- accel/accel.sh@12 -- # 
build_accel_config 00:07:18.890 05:06:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.890 05:06:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.890 05:06:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.890 05:06:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.890 05:06:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.890 05:06:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.890 05:06:08 -- accel/accel.sh@42 -- # jq -r . 00:07:18.890 -x option must be non-negative. 00:07:18.890 [2024-12-08 05:06:08.538638] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:18.890 accel_perf options: 00:07:18.890 [-h help message] 00:07:18.890 [-q queue depth per core] 00:07:18.890 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:18.890 [-T number of threads per core 00:07:18.890 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:18.890 [-t time in seconds] 00:07:18.890 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:18.890 [ dif_verify, , dif_generate, dif_generate_copy 00:07:18.890 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:18.890 [-l for compress/decompress workloads, name of uncompressed input file 00:07:18.890 [-S for crc32c workload, use this seed value (default 0) 00:07:18.890 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:18.890 [-f for fill workload, use this BYTE value (default 255) 00:07:18.890 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:18.890 [-y verify result if this switch is on] 00:07:18.890 [-a tasks to allocate per core (default: same value as -q)] 00:07:18.890 Can be used to spread operations across a wider range of memory. 
00:07:18.890 05:06:08 -- common/autotest_common.sh@653 -- # es=1 00:07:18.890 05:06:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:18.890 05:06:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:18.890 05:06:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:18.890 00:07:18.890 real 0m0.029s 00:07:18.890 user 0m0.016s 00:07:18.890 sys 0m0.013s 00:07:18.890 05:06:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.890 05:06:08 -- common/autotest_common.sh@10 -- # set +x 00:07:18.890 ************************************ 00:07:18.890 END TEST accel_negative_buffers 00:07:18.890 ************************************ 00:07:18.890 05:06:08 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:18.890 05:06:08 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:18.890 05:06:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.890 05:06:08 -- common/autotest_common.sh@10 -- # set +x 00:07:18.890 ************************************ 00:07:18.890 START TEST accel_crc32c 00:07:18.890 ************************************ 00:07:18.890 05:06:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:18.890 05:06:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:18.890 05:06:08 -- accel/accel.sh@17 -- # local accel_module 00:07:18.890 05:06:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:18.890 05:06:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:18.890 05:06:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.890 05:06:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.890 05:06:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.890 05:06:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.890 05:06:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.890 05:06:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.890 05:06:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.890 05:06:08 -- accel/accel.sh@42 -- # jq -r . 00:07:18.890 [2024-12-08 05:06:08.612052] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:18.890 [2024-12-08 05:06:08.612553] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68237 ] 00:07:19.149 [2024-12-08 05:06:08.752372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.149 [2024-12-08 05:06:08.799838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.544 05:06:09 -- accel/accel.sh@18 -- # out=' 00:07:20.544 SPDK Configuration: 00:07:20.544 Core mask: 0x1 00:07:20.544 00:07:20.544 Accel Perf Configuration: 00:07:20.544 Workload Type: crc32c 00:07:20.544 CRC-32C seed: 32 00:07:20.544 Transfer size: 4096 bytes 00:07:20.544 Vector count 1 00:07:20.544 Module: software 00:07:20.544 Queue depth: 32 00:07:20.544 Allocate depth: 32 00:07:20.544 # threads/core: 1 00:07:20.544 Run time: 1 seconds 00:07:20.544 Verify: Yes 00:07:20.544 00:07:20.544 Running for 1 seconds... 
00:07:20.544 00:07:20.544 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.544 ------------------------------------------------------------------------------------ 00:07:20.544 0,0 488512/s 1908 MiB/s 0 0 00:07:20.544 ==================================================================================== 00:07:20.544 Total 488512/s 1908 MiB/s 0 0' 00:07:20.544 05:06:09 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 05:06:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:20.544 05:06:09 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 05:06:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:20.544 05:06:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.544 05:06:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.544 05:06:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.544 05:06:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.544 05:06:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.544 05:06:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.544 05:06:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.544 05:06:09 -- accel/accel.sh@42 -- # jq -r . 00:07:20.544 [2024-12-08 05:06:09.957868] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:20.544 [2024-12-08 05:06:09.957962] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68262 ] 00:07:20.544 [2024-12-08 05:06:10.092950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.544 [2024-12-08 05:06:10.132710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.544 05:06:10 -- accel/accel.sh@21 -- # val= 00:07:20.544 05:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 05:06:10 -- accel/accel.sh@21 -- # val= 00:07:20.544 05:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 05:06:10 -- accel/accel.sh@21 -- # val=0x1 00:07:20.544 05:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 05:06:10 -- accel/accel.sh@21 -- # val= 00:07:20.544 05:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 05:06:10 -- accel/accel.sh@21 -- # val= 00:07:20.544 05:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 05:06:10 -- accel/accel.sh@21 -- # val=crc32c 00:07:20.544 05:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 05:06:10 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 05:06:10 -- accel/accel.sh@21 -- # val=32 00:07:20.544 05:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 05:06:10 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.544 05:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 05:06:10 -- accel/accel.sh@21 -- # val= 00:07:20.544 05:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 05:06:10 -- accel/accel.sh@21 -- # val=software 00:07:20.544 05:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 05:06:10 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 05:06:10 -- accel/accel.sh@21 -- # val=32 00:07:20.544 05:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 05:06:10 -- accel/accel.sh@21 -- # val=32 00:07:20.544 05:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 05:06:10 -- accel/accel.sh@21 -- # val=1 00:07:20.544 05:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 05:06:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.544 05:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 05:06:10 -- accel/accel.sh@20 -- # read -r var val 00:07:20.545 05:06:10 -- accel/accel.sh@21 -- # val=Yes 00:07:20.545 05:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.545 05:06:10 -- accel/accel.sh@20 -- # IFS=: 00:07:20.545 05:06:10 -- accel/accel.sh@20 -- # read -r var val 00:07:20.545 05:06:10 -- accel/accel.sh@21 -- # val= 00:07:20.545 05:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.545 05:06:10 -- accel/accel.sh@20 -- # IFS=: 00:07:20.545 05:06:10 -- accel/accel.sh@20 -- # read -r var val 00:07:20.545 05:06:10 -- accel/accel.sh@21 -- # val= 00:07:20.545 05:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.545 05:06:10 -- accel/accel.sh@20 -- # IFS=: 00:07:20.545 05:06:10 -- accel/accel.sh@20 -- # read -r var val 00:07:21.480 05:06:11 -- accel/accel.sh@21 -- # val= 00:07:21.480 05:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.480 05:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:21.480 05:06:11 -- accel/accel.sh@20 -- # read -r var val 00:07:21.480 05:06:11 -- accel/accel.sh@21 -- # val= 00:07:21.480 05:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.480 05:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:21.480 05:06:11 -- accel/accel.sh@20 -- # read -r var val 00:07:21.480 05:06:11 -- accel/accel.sh@21 -- # val= 00:07:21.480 05:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.480 05:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:21.480 05:06:11 -- accel/accel.sh@20 -- # read -r var val 00:07:21.480 05:06:11 -- accel/accel.sh@21 -- # val= 00:07:21.480 05:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.480 05:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:21.480 05:06:11 -- accel/accel.sh@20 -- # read -r var val 00:07:21.480 05:06:11 -- accel/accel.sh@21 -- # val= 00:07:21.741 05:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.741 05:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:21.741 05:06:11 -- 
accel/accel.sh@20 -- # read -r var val 00:07:21.741 05:06:11 -- accel/accel.sh@21 -- # val= 00:07:21.741 05:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.741 05:06:11 -- accel/accel.sh@20 -- # IFS=: 00:07:21.741 05:06:11 -- accel/accel.sh@20 -- # read -r var val 00:07:21.741 05:06:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.741 05:06:11 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:21.741 05:06:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.741 00:07:21.741 real 0m2.676s 00:07:21.741 user 0m2.309s 00:07:21.741 sys 0m0.165s 00:07:21.741 05:06:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.741 05:06:11 -- common/autotest_common.sh@10 -- # set +x 00:07:21.741 ************************************ 00:07:21.741 END TEST accel_crc32c 00:07:21.741 ************************************ 00:07:21.741 05:06:11 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:21.741 05:06:11 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:21.741 05:06:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.741 05:06:11 -- common/autotest_common.sh@10 -- # set +x 00:07:21.741 ************************************ 00:07:21.741 START TEST accel_crc32c_C2 00:07:21.741 ************************************ 00:07:21.741 05:06:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:21.741 05:06:11 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.741 05:06:11 -- accel/accel.sh@17 -- # local accel_module 00:07:21.741 05:06:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:21.741 05:06:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:21.741 05:06:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.741 05:06:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.741 05:06:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.741 05:06:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.741 05:06:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.741 05:06:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.741 05:06:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.741 05:06:11 -- accel/accel.sh@42 -- # jq -r . 00:07:21.741 [2024-12-08 05:06:11.342339] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:21.741 [2024-12-08 05:06:11.342431] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68291 ] 00:07:21.741 [2024-12-08 05:06:11.472287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.741 [2024-12-08 05:06:11.516720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.122 05:06:12 -- accel/accel.sh@18 -- # out=' 00:07:23.122 SPDK Configuration: 00:07:23.122 Core mask: 0x1 00:07:23.122 00:07:23.122 Accel Perf Configuration: 00:07:23.122 Workload Type: crc32c 00:07:23.122 CRC-32C seed: 0 00:07:23.122 Transfer size: 4096 bytes 00:07:23.122 Vector count 2 00:07:23.122 Module: software 00:07:23.122 Queue depth: 32 00:07:23.122 Allocate depth: 32 00:07:23.122 # threads/core: 1 00:07:23.122 Run time: 1 seconds 00:07:23.122 Verify: Yes 00:07:23.122 00:07:23.122 Running for 1 seconds... 
00:07:23.122 00:07:23.122 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.122 ------------------------------------------------------------------------------------ 00:07:23.122 0,0 370848/s 2897 MiB/s 0 0 00:07:23.122 ==================================================================================== 00:07:23.122 Total 370848/s 1448 MiB/s 0 0' 00:07:23.122 05:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:23.122 05:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:23.122 05:06:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:23.122 05:06:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:23.122 05:06:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.122 05:06:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.122 05:06:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.122 05:06:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.122 05:06:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.122 05:06:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.122 05:06:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.122 05:06:12 -- accel/accel.sh@42 -- # jq -r . 00:07:23.122 [2024-12-08 05:06:12.679126] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:23.122 [2024-12-08 05:06:12.679211] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68305 ] 00:07:23.122 [2024-12-08 05:06:12.815600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.122 [2024-12-08 05:06:12.860128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.122 05:06:12 -- accel/accel.sh@21 -- # val= 00:07:23.122 05:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.122 05:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:23.122 05:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:23.122 05:06:12 -- accel/accel.sh@21 -- # val= 00:07:23.122 05:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.122 05:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:23.122 05:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:23.122 05:06:12 -- accel/accel.sh@21 -- # val=0x1 00:07:23.122 05:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.122 05:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:23.122 05:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:23.122 05:06:12 -- accel/accel.sh@21 -- # val= 00:07:23.122 05:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.122 05:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:23.123 05:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:23.123 05:06:12 -- accel/accel.sh@21 -- # val= 00:07:23.123 05:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.123 05:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:23.123 05:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:23.123 05:06:12 -- accel/accel.sh@21 -- # val=crc32c 00:07:23.123 05:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.123 05:06:12 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:23.382 05:06:12 -- accel/accel.sh@21 -- # val=0 00:07:23.382 05:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:23.382 05:06:12 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:23.382 05:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:23.382 05:06:12 -- accel/accel.sh@21 -- # val= 00:07:23.382 05:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:23.382 05:06:12 -- accel/accel.sh@21 -- # val=software 00:07:23.382 05:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.382 05:06:12 -- accel/accel.sh@23 -- # accel_module=software 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:23.382 05:06:12 -- accel/accel.sh@21 -- # val=32 00:07:23.382 05:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:23.382 05:06:12 -- accel/accel.sh@21 -- # val=32 00:07:23.382 05:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:23.382 05:06:12 -- accel/accel.sh@21 -- # val=1 00:07:23.382 05:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:23.382 05:06:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:23.382 05:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:23.382 05:06:12 -- accel/accel.sh@21 -- # val=Yes 00:07:23.382 05:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:23.382 05:06:12 -- accel/accel.sh@21 -- # val= 00:07:23.382 05:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:23.382 05:06:12 -- accel/accel.sh@21 -- # val= 00:07:23.382 05:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # IFS=: 00:07:23.382 05:06:12 -- accel/accel.sh@20 -- # read -r var val 00:07:24.320 05:06:13 -- accel/accel.sh@21 -- # val= 00:07:24.320 05:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.320 05:06:13 -- accel/accel.sh@20 -- # IFS=: 00:07:24.320 05:06:13 -- accel/accel.sh@20 -- # read -r var val 00:07:24.320 05:06:13 -- accel/accel.sh@21 -- # val= 00:07:24.320 05:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.320 05:06:13 -- accel/accel.sh@20 -- # IFS=: 00:07:24.320 05:06:13 -- accel/accel.sh@20 -- # read -r var val 00:07:24.320 05:06:13 -- accel/accel.sh@21 -- # val= 00:07:24.320 05:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.320 05:06:13 -- accel/accel.sh@20 -- # IFS=: 00:07:24.320 05:06:13 -- accel/accel.sh@20 -- # read -r var val 00:07:24.320 05:06:14 -- accel/accel.sh@21 -- # val= 00:07:24.320 05:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.320 05:06:14 -- accel/accel.sh@20 -- # IFS=: 00:07:24.320 05:06:14 -- accel/accel.sh@20 -- # read -r var val 00:07:24.320 05:06:14 -- accel/accel.sh@21 -- # val= 00:07:24.320 05:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.320 05:06:14 -- accel/accel.sh@20 -- # IFS=: 00:07:24.320 05:06:14 -- 
accel/accel.sh@20 -- # read -r var val 00:07:24.320 05:06:14 -- accel/accel.sh@21 -- # val= 00:07:24.320 05:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.320 05:06:14 -- accel/accel.sh@20 -- # IFS=: 00:07:24.320 05:06:14 -- accel/accel.sh@20 -- # read -r var val 00:07:24.320 05:06:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:24.320 05:06:14 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:24.320 05:06:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.320 00:07:24.320 real 0m2.687s 00:07:24.320 user 0m2.301s 00:07:24.320 sys 0m0.176s 00:07:24.320 05:06:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:24.320 05:06:14 -- common/autotest_common.sh@10 -- # set +x 00:07:24.320 ************************************ 00:07:24.320 END TEST accel_crc32c_C2 00:07:24.320 ************************************ 00:07:24.320 05:06:14 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:24.320 05:06:14 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:24.320 05:06:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.320 05:06:14 -- common/autotest_common.sh@10 -- # set +x 00:07:24.320 ************************************ 00:07:24.320 START TEST accel_copy 00:07:24.320 ************************************ 00:07:24.320 05:06:14 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:07:24.320 05:06:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.320 05:06:14 -- accel/accel.sh@17 -- # local accel_module 00:07:24.320 05:06:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:24.320 05:06:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:24.320 05:06:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.320 05:06:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.320 05:06:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.320 05:06:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.320 05:06:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.320 05:06:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.320 05:06:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.320 05:06:14 -- accel/accel.sh@42 -- # jq -r . 00:07:24.320 [2024-12-08 05:06:14.082346] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:24.320 [2024-12-08 05:06:14.082444] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68345 ] 00:07:24.579 [2024-12-08 05:06:14.217358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.579 [2024-12-08 05:06:14.262630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.954 05:06:15 -- accel/accel.sh@18 -- # out=' 00:07:25.954 SPDK Configuration: 00:07:25.954 Core mask: 0x1 00:07:25.954 00:07:25.954 Accel Perf Configuration: 00:07:25.954 Workload Type: copy 00:07:25.954 Transfer size: 4096 bytes 00:07:25.954 Vector count 1 00:07:25.954 Module: software 00:07:25.954 Queue depth: 32 00:07:25.954 Allocate depth: 32 00:07:25.954 # threads/core: 1 00:07:25.954 Run time: 1 seconds 00:07:25.954 Verify: Yes 00:07:25.954 00:07:25.954 Running for 1 seconds... 
00:07:25.954 00:07:25.954 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:25.954 ------------------------------------------------------------------------------------ 00:07:25.954 0,0 335744/s 1311 MiB/s 0 0 00:07:25.954 ==================================================================================== 00:07:25.954 Total 335744/s 1311 MiB/s 0 0' 00:07:25.954 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:25.954 05:06:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:25.954 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:25.954 05:06:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:25.954 05:06:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.954 05:06:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.954 05:06:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.954 05:06:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.954 05:06:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.954 05:06:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.954 05:06:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.954 05:06:15 -- accel/accel.sh@42 -- # jq -r . 00:07:25.954 [2024-12-08 05:06:15.420611] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:25.954 [2024-12-08 05:06:15.420719] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68359 ] 00:07:25.954 [2024-12-08 05:06:15.555082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.954 [2024-12-08 05:06:15.598934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.954 05:06:15 -- accel/accel.sh@21 -- # val= 00:07:25.954 05:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.954 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:25.954 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:25.954 05:06:15 -- accel/accel.sh@21 -- # val= 00:07:25.954 05:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.954 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:25.954 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:25.954 05:06:15 -- accel/accel.sh@21 -- # val=0x1 00:07:25.954 05:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.954 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:25.954 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:25.954 05:06:15 -- accel/accel.sh@21 -- # val= 00:07:25.954 05:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.954 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:25.954 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:25.954 05:06:15 -- accel/accel.sh@21 -- # val= 00:07:25.954 05:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.954 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:25.954 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:25.954 05:06:15 -- accel/accel.sh@21 -- # val=copy 00:07:25.954 05:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.954 05:06:15 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:25.954 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:25.954 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:25.954 05:06:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.954 05:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.954 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:25.954 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:25.954 05:06:15 -- 
accel/accel.sh@21 -- # val= 00:07:25.954 05:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.954 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:25.954 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:25.954 05:06:15 -- accel/accel.sh@21 -- # val=software 00:07:25.954 05:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.954 05:06:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:25.954 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:25.955 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:25.955 05:06:15 -- accel/accel.sh@21 -- # val=32 00:07:25.955 05:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.955 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:25.955 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:25.955 05:06:15 -- accel/accel.sh@21 -- # val=32 00:07:25.955 05:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.955 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:25.955 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:25.955 05:06:15 -- accel/accel.sh@21 -- # val=1 00:07:25.955 05:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.955 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:25.955 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:25.955 05:06:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.955 05:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.955 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:25.955 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:25.955 05:06:15 -- accel/accel.sh@21 -- # val=Yes 00:07:25.955 05:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.955 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:25.955 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:25.955 05:06:15 -- accel/accel.sh@21 -- # val= 00:07:25.955 05:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.955 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:25.955 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:25.955 05:06:15 -- accel/accel.sh@21 -- # val= 00:07:25.955 05:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.955 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:07:25.955 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:07:27.331 05:06:16 -- accel/accel.sh@21 -- # val= 00:07:27.331 05:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.331 05:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:27.331 05:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:27.331 05:06:16 -- accel/accel.sh@21 -- # val= 00:07:27.331 05:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.331 05:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:27.331 05:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:27.331 05:06:16 -- accel/accel.sh@21 -- # val= 00:07:27.331 05:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.331 05:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:27.331 05:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:27.331 05:06:16 -- accel/accel.sh@21 -- # val= 00:07:27.331 05:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.331 05:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:27.331 05:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:27.331 05:06:16 -- accel/accel.sh@21 -- # val= 00:07:27.331 05:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.331 05:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:27.331 05:06:16 -- accel/accel.sh@20 -- # read -r var val 00:07:27.331 05:06:16 -- accel/accel.sh@21 -- # val= 00:07:27.331 05:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.331 05:06:16 -- accel/accel.sh@20 -- # IFS=: 00:07:27.331 05:06:16 -- 
accel/accel.sh@20 -- # read -r var val 00:07:27.331 05:06:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:27.331 05:06:16 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:27.331 05:06:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.331 00:07:27.331 real 0m2.683s 00:07:27.331 user 0m2.300s 00:07:27.331 sys 0m0.183s 00:07:27.331 05:06:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:27.332 05:06:16 -- common/autotest_common.sh@10 -- # set +x 00:07:27.332 ************************************ 00:07:27.332 END TEST accel_copy 00:07:27.332 ************************************ 00:07:27.332 05:06:16 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.332 05:06:16 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:27.332 05:06:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.332 05:06:16 -- common/autotest_common.sh@10 -- # set +x 00:07:27.332 ************************************ 00:07:27.332 START TEST accel_fill 00:07:27.332 ************************************ 00:07:27.332 05:06:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.332 05:06:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.332 05:06:16 -- accel/accel.sh@17 -- # local accel_module 00:07:27.332 05:06:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.332 05:06:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.332 05:06:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.332 05:06:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.332 05:06:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.332 05:06:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.332 05:06:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.332 05:06:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.332 05:06:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.332 05:06:16 -- accel/accel.sh@42 -- # jq -r . 00:07:27.332 [2024-12-08 05:06:16.810200] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:27.332 [2024-12-08 05:06:16.810295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68388 ] 00:07:27.332 [2024-12-08 05:06:16.941021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.332 [2024-12-08 05:06:16.984092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.712 05:06:18 -- accel/accel.sh@18 -- # out=' 00:07:28.712 SPDK Configuration: 00:07:28.712 Core mask: 0x1 00:07:28.712 00:07:28.712 Accel Perf Configuration: 00:07:28.712 Workload Type: fill 00:07:28.712 Fill pattern: 0x80 00:07:28.712 Transfer size: 4096 bytes 00:07:28.712 Vector count 1 00:07:28.712 Module: software 00:07:28.712 Queue depth: 64 00:07:28.712 Allocate depth: 64 00:07:28.712 # threads/core: 1 00:07:28.712 Run time: 1 seconds 00:07:28.712 Verify: Yes 00:07:28.712 00:07:28.712 Running for 1 seconds... 
00:07:28.712 00:07:28.712 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:28.712 ------------------------------------------------------------------------------------ 00:07:28.712 0,0 486592/s 1900 MiB/s 0 0 00:07:28.712 ==================================================================================== 00:07:28.712 Total 486592/s 1900 MiB/s 0 0' 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # IFS=: 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # read -r var val 00:07:28.712 05:06:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:28.712 05:06:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:28.712 05:06:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.712 05:06:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.712 05:06:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.712 05:06:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.712 05:06:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.712 05:06:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.712 05:06:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.712 05:06:18 -- accel/accel.sh@42 -- # jq -r . 00:07:28.712 [2024-12-08 05:06:18.145852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:28.712 [2024-12-08 05:06:18.145940] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68410 ] 00:07:28.712 [2024-12-08 05:06:18.281399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.712 [2024-12-08 05:06:18.320422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.712 05:06:18 -- accel/accel.sh@21 -- # val= 00:07:28.712 05:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # IFS=: 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # read -r var val 00:07:28.712 05:06:18 -- accel/accel.sh@21 -- # val= 00:07:28.712 05:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # IFS=: 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # read -r var val 00:07:28.712 05:06:18 -- accel/accel.sh@21 -- # val=0x1 00:07:28.712 05:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # IFS=: 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # read -r var val 00:07:28.712 05:06:18 -- accel/accel.sh@21 -- # val= 00:07:28.712 05:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # IFS=: 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # read -r var val 00:07:28.712 05:06:18 -- accel/accel.sh@21 -- # val= 00:07:28.712 05:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # IFS=: 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # read -r var val 00:07:28.712 05:06:18 -- accel/accel.sh@21 -- # val=fill 00:07:28.712 05:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.712 05:06:18 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # IFS=: 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # read -r var val 00:07:28.712 05:06:18 -- accel/accel.sh@21 -- # val=0x80 00:07:28.712 05:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # IFS=: 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # read -r var val 
00:07:28.712 05:06:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:28.712 05:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # IFS=: 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # read -r var val 00:07:28.712 05:06:18 -- accel/accel.sh@21 -- # val= 00:07:28.712 05:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # IFS=: 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # read -r var val 00:07:28.712 05:06:18 -- accel/accel.sh@21 -- # val=software 00:07:28.712 05:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.712 05:06:18 -- accel/accel.sh@23 -- # accel_module=software 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # IFS=: 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # read -r var val 00:07:28.712 05:06:18 -- accel/accel.sh@21 -- # val=64 00:07:28.712 05:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # IFS=: 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # read -r var val 00:07:28.712 05:06:18 -- accel/accel.sh@21 -- # val=64 00:07:28.712 05:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # IFS=: 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # read -r var val 00:07:28.712 05:06:18 -- accel/accel.sh@21 -- # val=1 00:07:28.712 05:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # IFS=: 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # read -r var val 00:07:28.712 05:06:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:28.712 05:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # IFS=: 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # read -r var val 00:07:28.712 05:06:18 -- accel/accel.sh@21 -- # val=Yes 00:07:28.712 05:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # IFS=: 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # read -r var val 00:07:28.712 05:06:18 -- accel/accel.sh@21 -- # val= 00:07:28.712 05:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # IFS=: 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # read -r var val 00:07:28.712 05:06:18 -- accel/accel.sh@21 -- # val= 00:07:28.712 05:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # IFS=: 00:07:28.712 05:06:18 -- accel/accel.sh@20 -- # read -r var val 00:07:29.670 05:06:19 -- accel/accel.sh@21 -- # val= 00:07:29.670 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.670 05:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:29.670 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:29.670 05:06:19 -- accel/accel.sh@21 -- # val= 00:07:29.670 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.670 05:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:29.670 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:29.670 05:06:19 -- accel/accel.sh@21 -- # val= 00:07:29.670 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.670 05:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:29.670 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:29.930 05:06:19 -- accel/accel.sh@21 -- # val= 00:07:29.930 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.930 05:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:29.930 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:29.930 05:06:19 -- accel/accel.sh@21 -- # val= 00:07:29.930 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.930 05:06:19 -- accel/accel.sh@20 -- # IFS=: 
00:07:29.930 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:29.930 05:06:19 -- accel/accel.sh@21 -- # val= 00:07:29.930 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.930 05:06:19 -- accel/accel.sh@20 -- # IFS=: 00:07:29.930 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:07:29.930 05:06:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:29.930 05:06:19 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:29.930 05:06:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.930 00:07:29.930 real 0m2.669s 00:07:29.930 user 0m2.295s 00:07:29.930 sys 0m0.167s 00:07:29.930 05:06:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.930 ************************************ 00:07:29.930 END TEST accel_fill 00:07:29.930 ************************************ 00:07:29.930 05:06:19 -- common/autotest_common.sh@10 -- # set +x 00:07:29.930 05:06:19 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:29.930 05:06:19 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:29.930 05:06:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.930 05:06:19 -- common/autotest_common.sh@10 -- # set +x 00:07:29.930 ************************************ 00:07:29.930 START TEST accel_copy_crc32c 00:07:29.930 ************************************ 00:07:29.930 05:06:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:07:29.930 05:06:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.930 05:06:19 -- accel/accel.sh@17 -- # local accel_module 00:07:29.930 05:06:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:29.930 05:06:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:29.930 05:06:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.930 05:06:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.930 05:06:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.930 05:06:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.930 05:06:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.930 05:06:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.930 05:06:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.930 05:06:19 -- accel/accel.sh@42 -- # jq -r . 00:07:29.930 [2024-12-08 05:06:19.530590] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:29.930 [2024-12-08 05:06:19.530699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68444 ] 00:07:29.930 [2024-12-08 05:06:19.667303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.930 [2024-12-08 05:06:19.714231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.311 05:06:20 -- accel/accel.sh@18 -- # out=' 00:07:31.311 SPDK Configuration: 00:07:31.311 Core mask: 0x1 00:07:31.311 00:07:31.311 Accel Perf Configuration: 00:07:31.312 Workload Type: copy_crc32c 00:07:31.312 CRC-32C seed: 0 00:07:31.312 Vector size: 4096 bytes 00:07:31.312 Transfer size: 4096 bytes 00:07:31.312 Vector count 1 00:07:31.312 Module: software 00:07:31.312 Queue depth: 32 00:07:31.312 Allocate depth: 32 00:07:31.312 # threads/core: 1 00:07:31.312 Run time: 1 seconds 00:07:31.312 Verify: Yes 00:07:31.312 00:07:31.312 Running for 1 seconds... 
00:07:31.312 00:07:31.312 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:31.312 ------------------------------------------------------------------------------------ 00:07:31.312 0,0 264192/s 1032 MiB/s 0 0 00:07:31.312 ==================================================================================== 00:07:31.312 Total 264192/s 1032 MiB/s 0 0' 00:07:31.312 05:06:20 -- accel/accel.sh@20 -- # IFS=: 00:07:31.312 05:06:20 -- accel/accel.sh@20 -- # read -r var val 00:07:31.312 05:06:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:31.312 05:06:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:31.312 05:06:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.312 05:06:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.312 05:06:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.312 05:06:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.312 05:06:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.312 05:06:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.312 05:06:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.312 05:06:20 -- accel/accel.sh@42 -- # jq -r . 00:07:31.312 [2024-12-08 05:06:20.872747] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:31.312 [2024-12-08 05:06:20.872841] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68458 ] 00:07:31.312 [2024-12-08 05:06:21.009359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.312 [2024-12-08 05:06:21.050366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.312 05:06:21 -- accel/accel.sh@21 -- # val= 00:07:31.312 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:31.312 05:06:21 -- accel/accel.sh@21 -- # val= 00:07:31.312 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:31.312 05:06:21 -- accel/accel.sh@21 -- # val=0x1 00:07:31.312 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:31.312 05:06:21 -- accel/accel.sh@21 -- # val= 00:07:31.312 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:31.312 05:06:21 -- accel/accel.sh@21 -- # val= 00:07:31.312 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:31.312 05:06:21 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:31.312 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.312 05:06:21 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:31.312 05:06:21 -- accel/accel.sh@21 -- # val=0 00:07:31.312 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:31.312 
05:06:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.312 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:31.312 05:06:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.312 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:31.312 05:06:21 -- accel/accel.sh@21 -- # val= 00:07:31.312 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:31.312 05:06:21 -- accel/accel.sh@21 -- # val=software 00:07:31.312 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.312 05:06:21 -- accel/accel.sh@23 -- # accel_module=software 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:31.312 05:06:21 -- accel/accel.sh@21 -- # val=32 00:07:31.312 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.312 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:31.572 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:31.572 05:06:21 -- accel/accel.sh@21 -- # val=32 00:07:31.572 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.572 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:31.572 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:31.572 05:06:21 -- accel/accel.sh@21 -- # val=1 00:07:31.572 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.572 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:31.572 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:31.572 05:06:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:31.572 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.572 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:31.572 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:31.572 05:06:21 -- accel/accel.sh@21 -- # val=Yes 00:07:31.572 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.572 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:31.572 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:31.572 05:06:21 -- accel/accel.sh@21 -- # val= 00:07:31.572 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.572 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:31.572 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:31.572 05:06:21 -- accel/accel.sh@21 -- # val= 00:07:31.572 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.572 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:31.572 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:32.510 05:06:22 -- accel/accel.sh@21 -- # val= 00:07:32.510 05:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.510 05:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:32.510 05:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:32.510 05:06:22 -- accel/accel.sh@21 -- # val= 00:07:32.510 05:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.510 05:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:32.510 05:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:32.510 05:06:22 -- accel/accel.sh@21 -- # val= 00:07:32.510 05:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.510 05:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:32.510 05:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:32.510 05:06:22 -- accel/accel.sh@21 -- # val= 00:07:32.510 05:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.510 05:06:22 -- accel/accel.sh@20 -- # IFS=: 
00:07:32.510 05:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:32.510 05:06:22 -- accel/accel.sh@21 -- # val= 00:07:32.510 05:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.510 05:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:32.510 05:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:32.510 05:06:22 -- accel/accel.sh@21 -- # val= 00:07:32.510 05:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.510 05:06:22 -- accel/accel.sh@20 -- # IFS=: 00:07:32.510 05:06:22 -- accel/accel.sh@20 -- # read -r var val 00:07:32.510 05:06:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:32.510 05:06:22 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:32.510 05:06:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.510 00:07:32.510 real 0m2.677s 00:07:32.510 user 0m2.302s 00:07:32.510 sys 0m0.172s 00:07:32.510 ************************************ 00:07:32.510 END TEST accel_copy_crc32c 00:07:32.510 ************************************ 00:07:32.510 05:06:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.510 05:06:22 -- common/autotest_common.sh@10 -- # set +x 00:07:32.510 05:06:22 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:32.510 05:06:22 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:32.510 05:06:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.510 05:06:22 -- common/autotest_common.sh@10 -- # set +x 00:07:32.510 ************************************ 00:07:32.510 START TEST accel_copy_crc32c_C2 00:07:32.510 ************************************ 00:07:32.510 05:06:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:32.510 05:06:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:32.510 05:06:22 -- accel/accel.sh@17 -- # local accel_module 00:07:32.510 05:06:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:32.510 05:06:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:32.510 05:06:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.510 05:06:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.510 05:06:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.510 05:06:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.510 05:06:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.510 05:06:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.510 05:06:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.510 05:06:22 -- accel/accel.sh@42 -- # jq -r . 00:07:32.510 [2024-12-08 05:06:22.259346] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:32.510 [2024-12-08 05:06:22.259610] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68493 ] 00:07:32.768 [2024-12-08 05:06:22.394607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.768 [2024-12-08 05:06:22.432127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.145 05:06:23 -- accel/accel.sh@18 -- # out=' 00:07:34.145 SPDK Configuration: 00:07:34.145 Core mask: 0x1 00:07:34.145 00:07:34.145 Accel Perf Configuration: 00:07:34.145 Workload Type: copy_crc32c 00:07:34.145 CRC-32C seed: 0 00:07:34.145 Vector size: 4096 bytes 00:07:34.145 Transfer size: 8192 bytes 00:07:34.145 Vector count 2 00:07:34.145 Module: software 00:07:34.145 Queue depth: 32 00:07:34.145 Allocate depth: 32 00:07:34.145 # threads/core: 1 00:07:34.145 Run time: 1 seconds 00:07:34.145 Verify: Yes 00:07:34.145 00:07:34.145 Running for 1 seconds... 00:07:34.145 00:07:34.145 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:34.145 ------------------------------------------------------------------------------------ 00:07:34.145 0,0 187872/s 1467 MiB/s 0 0 00:07:34.145 ==================================================================================== 00:07:34.145 Total 187872/s 733 MiB/s 0 0' 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:34.145 05:06:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:34.145 05:06:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:34.145 05:06:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.145 05:06:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.145 05:06:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.145 05:06:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.145 05:06:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.145 05:06:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.145 05:06:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.145 05:06:23 -- accel/accel.sh@42 -- # jq -r . 00:07:34.145 [2024-12-08 05:06:23.587286] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:34.145 [2024-12-08 05:06:23.587376] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68512 ] 00:07:34.145 [2024-12-08 05:06:23.722805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.145 [2024-12-08 05:06:23.768126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.145 05:06:23 -- accel/accel.sh@21 -- # val= 00:07:34.145 05:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:34.145 05:06:23 -- accel/accel.sh@21 -- # val= 00:07:34.145 05:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:34.145 05:06:23 -- accel/accel.sh@21 -- # val=0x1 00:07:34.145 05:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:34.145 05:06:23 -- accel/accel.sh@21 -- # val= 00:07:34.145 05:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:34.145 05:06:23 -- accel/accel.sh@21 -- # val= 00:07:34.145 05:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:34.145 05:06:23 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:34.145 05:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.145 05:06:23 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:34.145 05:06:23 -- accel/accel.sh@21 -- # val=0 00:07:34.145 05:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:34.145 05:06:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:34.145 05:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:34.145 05:06:23 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:34.145 05:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:34.145 05:06:23 -- accel/accel.sh@21 -- # val= 00:07:34.145 05:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:34.145 05:06:23 -- accel/accel.sh@21 -- # val=software 00:07:34.145 05:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.145 05:06:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:34.145 05:06:23 -- accel/accel.sh@21 -- # val=32 00:07:34.145 05:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:34.145 05:06:23 -- accel/accel.sh@21 -- # val=32 
00:07:34.145 05:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:34.145 05:06:23 -- accel/accel.sh@21 -- # val=1 00:07:34.145 05:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:34.145 05:06:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:34.145 05:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:34.145 05:06:23 -- accel/accel.sh@21 -- # val=Yes 00:07:34.145 05:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:34.145 05:06:23 -- accel/accel.sh@21 -- # val= 00:07:34.145 05:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:34.145 05:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:34.145 05:06:23 -- accel/accel.sh@21 -- # val= 00:07:34.146 05:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.146 05:06:23 -- accel/accel.sh@20 -- # IFS=: 00:07:34.146 05:06:23 -- accel/accel.sh@20 -- # read -r var val 00:07:35.521 05:06:24 -- accel/accel.sh@21 -- # val= 00:07:35.521 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.521 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:35.521 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:35.521 05:06:24 -- accel/accel.sh@21 -- # val= 00:07:35.521 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.521 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:35.521 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:35.521 05:06:24 -- accel/accel.sh@21 -- # val= 00:07:35.521 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.521 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:35.521 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:35.521 05:06:24 -- accel/accel.sh@21 -- # val= 00:07:35.521 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.521 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:35.521 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:35.521 05:06:24 -- accel/accel.sh@21 -- # val= 00:07:35.521 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.521 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:35.521 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:35.521 05:06:24 -- accel/accel.sh@21 -- # val= 00:07:35.521 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.521 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:35.521 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:35.521 05:06:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:35.521 05:06:24 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:35.521 05:06:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.521 00:07:35.521 real 0m2.674s 00:07:35.521 user 0m2.301s 00:07:35.521 sys 0m0.169s 00:07:35.521 05:06:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.521 05:06:24 -- common/autotest_common.sh@10 -- # set +x 00:07:35.521 ************************************ 00:07:35.521 END TEST accel_copy_crc32c_C2 00:07:35.521 ************************************ 00:07:35.521 05:06:24 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:35.521 05:06:24 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:07:35.521 05:06:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.521 05:06:24 -- common/autotest_common.sh@10 -- # set +x 00:07:35.521 ************************************ 00:07:35.521 START TEST accel_dualcast 00:07:35.521 ************************************ 00:07:35.521 05:06:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:07:35.521 05:06:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:35.521 05:06:24 -- accel/accel.sh@17 -- # local accel_module 00:07:35.521 05:06:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:35.521 05:06:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:35.521 05:06:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.521 05:06:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.521 05:06:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.521 05:06:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.521 05:06:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.521 05:06:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.521 05:06:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.521 05:06:24 -- accel/accel.sh@42 -- # jq -r . 00:07:35.521 [2024-12-08 05:06:24.988971] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:35.521 [2024-12-08 05:06:24.989077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68541 ] 00:07:35.521 [2024-12-08 05:06:25.127790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.521 [2024-12-08 05:06:25.167468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.894 05:06:26 -- accel/accel.sh@18 -- # out=' 00:07:36.894 SPDK Configuration: 00:07:36.894 Core mask: 0x1 00:07:36.894 00:07:36.894 Accel Perf Configuration: 00:07:36.894 Workload Type: dualcast 00:07:36.894 Transfer size: 4096 bytes 00:07:36.894 Vector count 1 00:07:36.894 Module: software 00:07:36.894 Queue depth: 32 00:07:36.894 Allocate depth: 32 00:07:36.894 # threads/core: 1 00:07:36.894 Run time: 1 seconds 00:07:36.894 Verify: Yes 00:07:36.894 00:07:36.894 Running for 1 seconds... 00:07:36.894 00:07:36.894 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:36.894 ------------------------------------------------------------------------------------ 00:07:36.894 0,0 367648/s 1436 MiB/s 0 0 00:07:36.894 ==================================================================================== 00:07:36.894 Total 367648/s 1436 MiB/s 0 0' 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:36.894 05:06:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:36.894 05:06:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:36.894 05:06:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.894 05:06:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.894 05:06:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.894 05:06:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.894 05:06:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.894 05:06:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.894 05:06:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.894 05:06:26 -- accel/accel.sh@42 -- # jq -r . 
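The block of trace lines just above that ends in "jq -r ." comes from the build_accel_config helper; the same sequence (accel_json_cfg=(), three "[[ 0 -gt 0 ]]" checks, "[[ -n '' ]]", "local IFS=,", "jq -r .") repeats before every accel_perf invocation in this log. The sketch below is a rough, hypothetical reconstruction of what that pattern plausibly does, not the actual accel.sh function body, and the JSON layout is an assumption; it only illustrates the idea of joining optional per-module JSON fragments with commas and validating the result with jq before it is handed to accel_perf via the -c option.

    # Hypothetical reconstruction of the traced pattern, for illustration only.
    build_accel_config_sketch() {
        local accel_json_cfg=()   # optional modules would append JSON fragments here
        # The three "[[ 0 -gt 0 ]]" checks in the trace correspond to module counts
        # that are all zero in this run, and "[[ -n '' ]]" to an unset extra
        # fragment, so the config array stays empty.
        local IFS=,
        printf '{"subsystems":[{"subsystem":"accel","config":[%s]}]}\n' \
            "${accel_json_cfg[*]}" | jq -r .
    }
    build_accel_config_sketch   # prints an empty accel config as pretty-printed JSON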
00:07:36.894 [2024-12-08 05:06:26.321124] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:36.894 [2024-12-08 05:06:26.321215] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68561 ] 00:07:36.894 [2024-12-08 05:06:26.457765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.894 [2024-12-08 05:06:26.501483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.894 05:06:26 -- accel/accel.sh@21 -- # val= 00:07:36.894 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:36.894 05:06:26 -- accel/accel.sh@21 -- # val= 00:07:36.894 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:36.894 05:06:26 -- accel/accel.sh@21 -- # val=0x1 00:07:36.894 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:36.894 05:06:26 -- accel/accel.sh@21 -- # val= 00:07:36.894 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:36.894 05:06:26 -- accel/accel.sh@21 -- # val= 00:07:36.894 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:36.894 05:06:26 -- accel/accel.sh@21 -- # val=dualcast 00:07:36.894 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.894 05:06:26 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:36.894 05:06:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:36.894 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:36.894 05:06:26 -- accel/accel.sh@21 -- # val= 00:07:36.894 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:36.894 05:06:26 -- accel/accel.sh@21 -- # val=software 00:07:36.894 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.894 05:06:26 -- accel/accel.sh@23 -- # accel_module=software 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:36.894 05:06:26 -- accel/accel.sh@21 -- # val=32 00:07:36.894 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:36.894 05:06:26 -- accel/accel.sh@21 -- # val=32 00:07:36.894 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:36.894 05:06:26 -- accel/accel.sh@21 -- # val=1 00:07:36.894 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:36.894 
05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:36.894 05:06:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:36.894 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:36.894 05:06:26 -- accel/accel.sh@21 -- # val=Yes 00:07:36.894 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:36.894 05:06:26 -- accel/accel.sh@21 -- # val= 00:07:36.894 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:36.894 05:06:26 -- accel/accel.sh@21 -- # val= 00:07:36.894 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:07:36.894 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:07:38.272 05:06:27 -- accel/accel.sh@21 -- # val= 00:07:38.272 05:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.272 05:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:38.272 05:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:38.272 05:06:27 -- accel/accel.sh@21 -- # val= 00:07:38.272 05:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.272 05:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:38.272 05:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:38.272 05:06:27 -- accel/accel.sh@21 -- # val= 00:07:38.272 05:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.272 05:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:38.272 05:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:38.272 05:06:27 -- accel/accel.sh@21 -- # val= 00:07:38.272 05:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.272 05:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:38.272 05:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:38.272 05:06:27 -- accel/accel.sh@21 -- # val= 00:07:38.272 05:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.272 05:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:38.272 05:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:38.272 05:06:27 -- accel/accel.sh@21 -- # val= 00:07:38.272 05:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.272 05:06:27 -- accel/accel.sh@20 -- # IFS=: 00:07:38.272 05:06:27 -- accel/accel.sh@20 -- # read -r var val 00:07:38.272 05:06:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:38.272 05:06:27 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:38.272 05:06:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.272 00:07:38.272 real 0m2.683s 00:07:38.272 user 0m2.312s 00:07:38.272 sys 0m0.164s 00:07:38.272 05:06:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.272 05:06:27 -- common/autotest_common.sh@10 -- # set +x 00:07:38.272 ************************************ 00:07:38.272 END TEST accel_dualcast 00:07:38.272 ************************************ 00:07:38.272 05:06:27 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:38.272 05:06:27 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:38.272 05:06:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.272 05:06:27 -- common/autotest_common.sh@10 -- # set +x 00:07:38.272 ************************************ 00:07:38.272 START TEST accel_compare 00:07:38.272 ************************************ 00:07:38.272 05:06:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:07:38.272 
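Most of the repetitive trace lines in these tests ("IFS=:", "read -r var val", "case \"$var\" in", and the long runs of "val=..." assignments) come from a loop that re-parses the captured accel_perf summary, splitting each "Key: Value" line on ":" so the script can record the workload type and the module and assert on them at the end of each test (the "[[ -n software ]]", "[[ -n compare ]]", "[[ software == \s\o\f\t\w\a\r\e ]]" checks). The following is a simplified stand-alone version of that idea, not the exact accel.sh loop:

    # Parse a "Key: Value" summary the same way the traced loop does.
    out="$(printf '%s\n' 'Workload Type: compare' 'Module: software')"
    accel_opc='' accel_module=''
    while IFS=: read -r var val; do
        val="${val# }"                       # drop the space left after the colon
        case "$var" in
            'Workload Type') accel_opc=$val ;;
            'Module')        accel_module=$val ;;
        esac
    done <<< "$out"
    [[ -n $accel_opc && -n $accel_module ]] && echo "parsed: $accel_opc via $accel_module module"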
05:06:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.272 05:06:27 -- accel/accel.sh@17 -- # local accel_module 00:07:38.272 05:06:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:38.272 05:06:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:38.272 05:06:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.272 05:06:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.272 05:06:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.272 05:06:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.272 05:06:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.272 05:06:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.272 05:06:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.272 05:06:27 -- accel/accel.sh@42 -- # jq -r . 00:07:38.272 [2024-12-08 05:06:27.715654] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:38.272 [2024-12-08 05:06:27.715919] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68595 ] 00:07:38.272 [2024-12-08 05:06:27.847833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.272 [2024-12-08 05:06:27.886856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.647 05:06:29 -- accel/accel.sh@18 -- # out=' 00:07:39.647 SPDK Configuration: 00:07:39.647 Core mask: 0x1 00:07:39.647 00:07:39.647 Accel Perf Configuration: 00:07:39.647 Workload Type: compare 00:07:39.648 Transfer size: 4096 bytes 00:07:39.648 Vector count 1 00:07:39.648 Module: software 00:07:39.648 Queue depth: 32 00:07:39.648 Allocate depth: 32 00:07:39.648 # threads/core: 1 00:07:39.648 Run time: 1 seconds 00:07:39.648 Verify: Yes 00:07:39.648 00:07:39.648 Running for 1 seconds... 00:07:39.648 00:07:39.648 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:39.648 ------------------------------------------------------------------------------------ 00:07:39.648 0,0 478432/s 1868 MiB/s 0 0 00:07:39.648 ==================================================================================== 00:07:39.648 Total 478432/s 1868 MiB/s 0 0' 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # IFS=: 00:07:39.648 05:06:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # read -r var val 00:07:39.648 05:06:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:39.648 05:06:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.648 05:06:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.648 05:06:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.648 05:06:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.648 05:06:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.648 05:06:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.648 05:06:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.648 05:06:29 -- accel/accel.sh@42 -- # jq -r . 00:07:39.648 [2024-12-08 05:06:29.038480] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:39.648 [2024-12-08 05:06:29.039027] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68609 ] 00:07:39.648 [2024-12-08 05:06:29.172853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.648 [2024-12-08 05:06:29.210007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.648 05:06:29 -- accel/accel.sh@21 -- # val= 00:07:39.648 05:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # IFS=: 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # read -r var val 00:07:39.648 05:06:29 -- accel/accel.sh@21 -- # val= 00:07:39.648 05:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # IFS=: 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # read -r var val 00:07:39.648 05:06:29 -- accel/accel.sh@21 -- # val=0x1 00:07:39.648 05:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # IFS=: 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # read -r var val 00:07:39.648 05:06:29 -- accel/accel.sh@21 -- # val= 00:07:39.648 05:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # IFS=: 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # read -r var val 00:07:39.648 05:06:29 -- accel/accel.sh@21 -- # val= 00:07:39.648 05:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # IFS=: 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # read -r var val 00:07:39.648 05:06:29 -- accel/accel.sh@21 -- # val=compare 00:07:39.648 05:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.648 05:06:29 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # IFS=: 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # read -r var val 00:07:39.648 05:06:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:39.648 05:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # IFS=: 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # read -r var val 00:07:39.648 05:06:29 -- accel/accel.sh@21 -- # val= 00:07:39.648 05:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # IFS=: 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # read -r var val 00:07:39.648 05:06:29 -- accel/accel.sh@21 -- # val=software 00:07:39.648 05:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.648 05:06:29 -- accel/accel.sh@23 -- # accel_module=software 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # IFS=: 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # read -r var val 00:07:39.648 05:06:29 -- accel/accel.sh@21 -- # val=32 00:07:39.648 05:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # IFS=: 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # read -r var val 00:07:39.648 05:06:29 -- accel/accel.sh@21 -- # val=32 00:07:39.648 05:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # IFS=: 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # read -r var val 00:07:39.648 05:06:29 -- accel/accel.sh@21 -- # val=1 00:07:39.648 05:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # IFS=: 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # read -r var val 00:07:39.648 05:06:29 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:39.648 05:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # IFS=: 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # read -r var val 00:07:39.648 05:06:29 -- accel/accel.sh@21 -- # val=Yes 00:07:39.648 05:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # IFS=: 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # read -r var val 00:07:39.648 05:06:29 -- accel/accel.sh@21 -- # val= 00:07:39.648 05:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # IFS=: 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # read -r var val 00:07:39.648 05:06:29 -- accel/accel.sh@21 -- # val= 00:07:39.648 05:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # IFS=: 00:07:39.648 05:06:29 -- accel/accel.sh@20 -- # read -r var val 00:07:40.587 05:06:30 -- accel/accel.sh@21 -- # val= 00:07:40.587 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.587 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:40.587 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:40.587 05:06:30 -- accel/accel.sh@21 -- # val= 00:07:40.587 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.587 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:40.587 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:40.587 05:06:30 -- accel/accel.sh@21 -- # val= 00:07:40.587 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.587 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:40.587 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:40.587 05:06:30 -- accel/accel.sh@21 -- # val= 00:07:40.587 ************************************ 00:07:40.587 END TEST accel_compare 00:07:40.587 ************************************ 00:07:40.587 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.587 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:40.587 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:40.587 05:06:30 -- accel/accel.sh@21 -- # val= 00:07:40.587 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.587 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:40.587 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:40.587 05:06:30 -- accel/accel.sh@21 -- # val= 00:07:40.587 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.587 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:07:40.587 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:07:40.587 05:06:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:40.587 05:06:30 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:40.587 05:06:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.587 00:07:40.587 real 0m2.651s 00:07:40.587 user 0m2.293s 00:07:40.587 sys 0m0.152s 00:07:40.587 05:06:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.587 05:06:30 -- common/autotest_common.sh@10 -- # set +x 00:07:40.847 05:06:30 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:40.847 05:06:30 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:40.847 05:06:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.848 05:06:30 -- common/autotest_common.sh@10 -- # set +x 00:07:40.848 ************************************ 00:07:40.848 START TEST accel_xor 00:07:40.848 ************************************ 00:07:40.848 05:06:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:40.848 05:06:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.848 05:06:30 -- accel/accel.sh@17 -- # local accel_module 00:07:40.848 
05:06:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:40.848 05:06:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:40.848 05:06:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.848 05:06:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.848 05:06:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.848 05:06:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.848 05:06:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.848 05:06:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.848 05:06:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.848 05:06:30 -- accel/accel.sh@42 -- # jq -r . 00:07:40.848 [2024-12-08 05:06:30.424486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:40.848 [2024-12-08 05:06:30.424608] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68644 ] 00:07:40.848 [2024-12-08 05:06:30.562001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.848 [2024-12-08 05:06:30.600687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.227 05:06:31 -- accel/accel.sh@18 -- # out=' 00:07:42.227 SPDK Configuration: 00:07:42.227 Core mask: 0x1 00:07:42.227 00:07:42.227 Accel Perf Configuration: 00:07:42.227 Workload Type: xor 00:07:42.227 Source buffers: 2 00:07:42.227 Transfer size: 4096 bytes 00:07:42.227 Vector count 1 00:07:42.227 Module: software 00:07:42.227 Queue depth: 32 00:07:42.227 Allocate depth: 32 00:07:42.227 # threads/core: 1 00:07:42.227 Run time: 1 seconds 00:07:42.227 Verify: Yes 00:07:42.227 00:07:42.227 Running for 1 seconds... 00:07:42.227 00:07:42.227 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:42.227 ------------------------------------------------------------------------------------ 00:07:42.227 0,0 255680/s 998 MiB/s 0 0 00:07:42.227 ==================================================================================== 00:07:42.227 Total 255680/s 998 MiB/s 0 0' 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:42.227 05:06:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:42.227 05:06:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:42.227 05:06:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.227 05:06:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.227 05:06:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.227 05:06:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.227 05:06:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.227 05:06:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.227 05:06:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.227 05:06:31 -- accel/accel.sh@42 -- # jq -r . 00:07:42.227 [2024-12-08 05:06:31.754436] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:42.227 [2024-12-08 05:06:31.754526] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68658 ] 00:07:42.227 [2024-12-08 05:06:31.891596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.227 [2024-12-08 05:06:31.931199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.227 05:06:31 -- accel/accel.sh@21 -- # val= 00:07:42.227 05:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:42.227 05:06:31 -- accel/accel.sh@21 -- # val= 00:07:42.227 05:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:42.227 05:06:31 -- accel/accel.sh@21 -- # val=0x1 00:07:42.227 05:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:42.227 05:06:31 -- accel/accel.sh@21 -- # val= 00:07:42.227 05:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:42.227 05:06:31 -- accel/accel.sh@21 -- # val= 00:07:42.227 05:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:42.227 05:06:31 -- accel/accel.sh@21 -- # val=xor 00:07:42.227 05:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.227 05:06:31 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:42.227 05:06:31 -- accel/accel.sh@21 -- # val=2 00:07:42.227 05:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:42.227 05:06:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:42.227 05:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:42.227 05:06:31 -- accel/accel.sh@21 -- # val= 00:07:42.227 05:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:42.227 05:06:31 -- accel/accel.sh@21 -- # val=software 00:07:42.227 05:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.227 05:06:31 -- accel/accel.sh@23 -- # accel_module=software 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:42.227 05:06:31 -- accel/accel.sh@21 -- # val=32 00:07:42.227 05:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:42.227 05:06:31 -- accel/accel.sh@21 -- # val=32 00:07:42.227 05:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:42.227 05:06:31 -- accel/accel.sh@21 -- # val=1 00:07:42.227 05:06:31 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:42.227 05:06:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:42.227 05:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:42.227 05:06:31 -- accel/accel.sh@21 -- # val=Yes 00:07:42.227 05:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:42.227 05:06:31 -- accel/accel.sh@21 -- # val= 00:07:42.227 05:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:42.227 05:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:42.228 05:06:31 -- accel/accel.sh@21 -- # val= 00:07:42.228 05:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.228 05:06:31 -- accel/accel.sh@20 -- # IFS=: 00:07:42.228 05:06:31 -- accel/accel.sh@20 -- # read -r var val 00:07:43.636 05:06:33 -- accel/accel.sh@21 -- # val= 00:07:43.636 05:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.636 05:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:43.636 05:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:43.636 05:06:33 -- accel/accel.sh@21 -- # val= 00:07:43.636 05:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.636 05:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:43.636 05:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:43.636 05:06:33 -- accel/accel.sh@21 -- # val= 00:07:43.636 05:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.636 05:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:43.636 05:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:43.636 05:06:33 -- accel/accel.sh@21 -- # val= 00:07:43.636 05:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.636 05:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:43.636 05:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:43.636 05:06:33 -- accel/accel.sh@21 -- # val= 00:07:43.636 05:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.636 05:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:43.636 05:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:43.636 05:06:33 -- accel/accel.sh@21 -- # val= 00:07:43.636 05:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.636 05:06:33 -- accel/accel.sh@20 -- # IFS=: 00:07:43.636 05:06:33 -- accel/accel.sh@20 -- # read -r var val 00:07:43.636 05:06:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:43.636 05:06:33 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:43.636 05:06:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.636 00:07:43.636 real 0m2.670s 00:07:43.636 user 0m2.301s 00:07:43.636 sys 0m0.164s 00:07:43.636 ************************************ 00:07:43.636 END TEST accel_xor 00:07:43.636 ************************************ 00:07:43.636 05:06:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.636 05:06:33 -- common/autotest_common.sh@10 -- # set +x 00:07:43.636 05:06:33 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:43.636 05:06:33 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:43.636 05:06:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.636 05:06:33 -- common/autotest_common.sh@10 -- # set +x 00:07:43.636 ************************************ 00:07:43.636 START TEST accel_xor 00:07:43.636 ************************************ 00:07:43.636 
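The second xor test below passes "-x 3", so each operation combines three source buffers instead of the two used in the run above (compare the "Source buffers: 2" and "Source buffers: 3" lines in the two configuration dumps). As a toy illustration of what the workload computes, each destination byte is the bytewise XOR of the corresponding bytes of all sources; a quick shell check on single byte values:

    # Toy example only: dest byte = src1 ^ src2 ^ src3 for the -x 3 run.
    a=$(( 0xA5 )); b=$(( 0x3C )); c=$(( 0x0F ))
    printf '0xA5 ^ 0x3C ^ 0x0F = 0x%02X\n' $(( a ^ b ^ c ))   # -> 0x96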
05:06:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:43.636 05:06:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.636 05:06:33 -- accel/accel.sh@17 -- # local accel_module 00:07:43.636 05:06:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:43.636 05:06:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:43.636 05:06:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.636 05:06:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.636 05:06:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.636 05:06:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.636 05:06:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.637 05:06:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.637 05:06:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.637 05:06:33 -- accel/accel.sh@42 -- # jq -r . 00:07:43.637 [2024-12-08 05:06:33.142347] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:43.637 [2024-12-08 05:06:33.142437] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68692 ] 00:07:43.637 [2024-12-08 05:06:33.280197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.637 [2024-12-08 05:06:33.316333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.015 05:06:34 -- accel/accel.sh@18 -- # out=' 00:07:45.015 SPDK Configuration: 00:07:45.015 Core mask: 0x1 00:07:45.015 00:07:45.015 Accel Perf Configuration: 00:07:45.015 Workload Type: xor 00:07:45.015 Source buffers: 3 00:07:45.015 Transfer size: 4096 bytes 00:07:45.015 Vector count 1 00:07:45.015 Module: software 00:07:45.015 Queue depth: 32 00:07:45.015 Allocate depth: 32 00:07:45.015 # threads/core: 1 00:07:45.015 Run time: 1 seconds 00:07:45.015 Verify: Yes 00:07:45.015 00:07:45.015 Running for 1 seconds... 00:07:45.015 00:07:45.015 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:45.015 ------------------------------------------------------------------------------------ 00:07:45.015 0,0 237120/s 926 MiB/s 0 0 00:07:45.015 ==================================================================================== 00:07:45.015 Total 237120/s 926 MiB/s 0 0' 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:45.015 05:06:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:45.015 05:06:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:45.015 05:06:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.015 05:06:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.015 05:06:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.015 05:06:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.015 05:06:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.015 05:06:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.015 05:06:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.015 05:06:34 -- accel/accel.sh@42 -- # jq -r . 00:07:45.015 [2024-12-08 05:06:34.466801] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:45.015 [2024-12-08 05:06:34.467064] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68712 ] 00:07:45.015 [2024-12-08 05:06:34.599671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.015 [2024-12-08 05:06:34.635001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.015 05:06:34 -- accel/accel.sh@21 -- # val= 00:07:45.015 05:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:45.015 05:06:34 -- accel/accel.sh@21 -- # val= 00:07:45.015 05:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:45.015 05:06:34 -- accel/accel.sh@21 -- # val=0x1 00:07:45.015 05:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:45.015 05:06:34 -- accel/accel.sh@21 -- # val= 00:07:45.015 05:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:45.015 05:06:34 -- accel/accel.sh@21 -- # val= 00:07:45.015 05:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:45.015 05:06:34 -- accel/accel.sh@21 -- # val=xor 00:07:45.015 05:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.015 05:06:34 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:45.015 05:06:34 -- accel/accel.sh@21 -- # val=3 00:07:45.015 05:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:45.015 05:06:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:45.015 05:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:45.015 05:06:34 -- accel/accel.sh@21 -- # val= 00:07:45.015 05:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:45.015 05:06:34 -- accel/accel.sh@21 -- # val=software 00:07:45.015 05:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.015 05:06:34 -- accel/accel.sh@23 -- # accel_module=software 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:45.015 05:06:34 -- accel/accel.sh@21 -- # val=32 00:07:45.015 05:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:45.015 05:06:34 -- accel/accel.sh@21 -- # val=32 00:07:45.015 05:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:45.015 05:06:34 -- accel/accel.sh@21 -- # val=1 00:07:45.015 05:06:34 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:45.015 05:06:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:45.015 05:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:45.015 05:06:34 -- accel/accel.sh@21 -- # val=Yes 00:07:45.015 05:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:45.015 05:06:34 -- accel/accel.sh@21 -- # val= 00:07:45.015 05:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:45.015 05:06:34 -- accel/accel.sh@21 -- # val= 00:07:45.015 05:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:45.015 05:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:46.390 05:06:35 -- accel/accel.sh@21 -- # val= 00:07:46.390 05:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.390 05:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:46.390 05:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:46.390 05:06:35 -- accel/accel.sh@21 -- # val= 00:07:46.390 05:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.390 05:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:46.390 05:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:46.390 05:06:35 -- accel/accel.sh@21 -- # val= 00:07:46.390 05:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.390 05:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:46.390 05:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:46.390 05:06:35 -- accel/accel.sh@21 -- # val= 00:07:46.390 05:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.390 05:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:46.390 05:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:46.390 05:06:35 -- accel/accel.sh@21 -- # val= 00:07:46.390 05:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.390 05:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:46.390 05:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:46.390 05:06:35 -- accel/accel.sh@21 -- # val= 00:07:46.390 05:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.390 05:06:35 -- accel/accel.sh@20 -- # IFS=: 00:07:46.390 05:06:35 -- accel/accel.sh@20 -- # read -r var val 00:07:46.390 05:06:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:46.390 05:06:35 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:46.390 05:06:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.390 ************************************ 00:07:46.390 END TEST accel_xor 00:07:46.390 ************************************ 00:07:46.390 00:07:46.390 real 0m2.652s 00:07:46.390 user 0m2.295s 00:07:46.390 sys 0m0.152s 00:07:46.390 05:06:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.390 05:06:35 -- common/autotest_common.sh@10 -- # set +x 00:07:46.390 05:06:35 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:46.390 05:06:35 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:46.390 05:06:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.390 05:06:35 -- common/autotest_common.sh@10 -- # set +x 00:07:46.390 ************************************ 00:07:46.390 START TEST accel_dif_verify 00:07:46.390 ************************************ 
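The accel_xor case that just finished drives the accel_perf example with -t 1 (one-second measurement), -w xor, -y (verify the result) and -x 3 (three source buffers), matching the "Source buffers: 3" and "Verify: Yes" lines in its configuration dump; the harness also feeds a generated JSON config through -c /dev/fd/62, which stays empty in this run, so the job lands on the software module ("Module: software" in the dump). A minimal sketch for repeating that run by hand, assuming the same workspace layout as this CI node:

  # software xor over 3 source buffers, verified, measured for 1 second
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3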
00:07:46.390 05:06:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:46.390 05:06:35 -- accel/accel.sh@16 -- # local accel_opc 00:07:46.390 05:06:35 -- accel/accel.sh@17 -- # local accel_module 00:07:46.390 05:06:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:46.390 05:06:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:46.390 05:06:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.390 05:06:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.390 05:06:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.390 05:06:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.390 05:06:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.390 05:06:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.390 05:06:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.390 05:06:35 -- accel/accel.sh@42 -- # jq -r . 00:07:46.390 [2024-12-08 05:06:35.846832] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:46.390 [2024-12-08 05:06:35.846931] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68741 ] 00:07:46.390 [2024-12-08 05:06:35.983359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.390 [2024-12-08 05:06:36.020330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.771 05:06:37 -- accel/accel.sh@18 -- # out=' 00:07:47.771 SPDK Configuration: 00:07:47.771 Core mask: 0x1 00:07:47.771 00:07:47.771 Accel Perf Configuration: 00:07:47.771 Workload Type: dif_verify 00:07:47.771 Vector size: 4096 bytes 00:07:47.771 Transfer size: 4096 bytes 00:07:47.771 Block size: 512 bytes 00:07:47.771 Metadata size: 8 bytes 00:07:47.771 Vector count 1 00:07:47.771 Module: software 00:07:47.771 Queue depth: 32 00:07:47.771 Allocate depth: 32 00:07:47.771 # threads/core: 1 00:07:47.771 Run time: 1 seconds 00:07:47.771 Verify: No 00:07:47.771 00:07:47.771 Running for 1 seconds... 00:07:47.771 00:07:47.771 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:47.771 ------------------------------------------------------------------------------------ 00:07:47.771 0,0 107072/s 424 MiB/s 0 0 00:07:47.771 ==================================================================================== 00:07:47.771 Total 107072/s 418 MiB/s 0 0' 00:07:47.771 05:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.771 05:06:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:47.771 05:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.771 05:06:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.771 05:06:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:47.771 05:06:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.771 05:06:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.771 05:06:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.771 05:06:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.771 05:06:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.771 05:06:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.771 05:06:37 -- accel/accel.sh@42 -- # jq -r . 00:07:47.771 [2024-12-08 05:06:37.173600] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:47.771 [2024-12-08 05:06:37.173723] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68760 ] 00:07:47.771 [2024-12-08 05:06:37.310101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.771 [2024-12-08 05:06:37.350058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.771 05:06:37 -- accel/accel.sh@21 -- # val= 00:07:47.771 05:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.771 05:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.771 05:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.771 05:06:37 -- accel/accel.sh@21 -- # val= 00:07:47.771 05:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.771 05:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.771 05:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.771 05:06:37 -- accel/accel.sh@21 -- # val=0x1 00:07:47.771 05:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.771 05:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.771 05:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.771 05:06:37 -- accel/accel.sh@21 -- # val= 00:07:47.771 05:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.771 05:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.771 05:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.771 05:06:37 -- accel/accel.sh@21 -- # val= 00:07:47.771 05:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.771 05:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.771 05:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.771 05:06:37 -- accel/accel.sh@21 -- # val=dif_verify 00:07:47.771 05:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.771 05:06:37 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:47.771 05:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.771 05:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.771 05:06:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:47.771 05:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.771 05:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.771 05:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.771 05:06:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:47.771 05:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.771 05:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.772 05:06:37 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:47.772 05:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.772 05:06:37 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:47.772 05:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.772 05:06:37 -- accel/accel.sh@21 -- # val= 00:07:47.772 05:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.772 05:06:37 -- accel/accel.sh@21 -- # val=software 00:07:47.772 05:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.772 05:06:37 -- accel/accel.sh@23 -- # accel_module=software 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.772 05:06:37 -- accel/accel.sh@21 
-- # val=32 00:07:47.772 05:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.772 05:06:37 -- accel/accel.sh@21 -- # val=32 00:07:47.772 05:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.772 05:06:37 -- accel/accel.sh@21 -- # val=1 00:07:47.772 05:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.772 05:06:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:47.772 05:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.772 05:06:37 -- accel/accel.sh@21 -- # val=No 00:07:47.772 05:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.772 05:06:37 -- accel/accel.sh@21 -- # val= 00:07:47.772 05:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.772 05:06:37 -- accel/accel.sh@21 -- # val= 00:07:47.772 05:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.772 05:06:37 -- accel/accel.sh@20 -- # read -r var val 00:07:48.710 05:06:38 -- accel/accel.sh@21 -- # val= 00:07:48.710 05:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.710 05:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:48.710 05:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:48.710 05:06:38 -- accel/accel.sh@21 -- # val= 00:07:48.710 05:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.710 05:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:48.710 05:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:48.710 05:06:38 -- accel/accel.sh@21 -- # val= 00:07:48.710 05:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.710 05:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:48.710 05:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:48.710 05:06:38 -- accel/accel.sh@21 -- # val= 00:07:48.710 05:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.710 05:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:48.710 05:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:48.710 05:06:38 -- accel/accel.sh@21 -- # val= 00:07:48.710 05:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.710 05:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:48.710 05:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:48.710 05:06:38 -- accel/accel.sh@21 -- # val= 00:07:48.710 05:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.710 05:06:38 -- accel/accel.sh@20 -- # IFS=: 00:07:48.710 05:06:38 -- accel/accel.sh@20 -- # read -r var val 00:07:48.710 05:06:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:48.710 05:06:38 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:48.710 05:06:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.710 00:07:48.710 real 0m2.666s 00:07:48.710 user 0m2.300s 00:07:48.710 sys 0m0.161s 00:07:48.711 ************************************ 00:07:48.711 END TEST accel_dif_verify 00:07:48.711 ************************************ 00:07:48.711 05:06:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.711 
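The dif_verify numbers above can be cross-checked against the configuration dump: 107072 transfers/s at a 4096-byte transfer size works out to 107072 × 4096 ≈ 438.6 MB/s, i.e. roughly 418 MiB/s, which is what the Total row reports; the per-core 0,0 row shows a slightly higher 424 MiB/s, presumably because it is normalized over that worker's measured elapsed time rather than a flat second. A sketch for repeating only this workload outside run_test, under the same repo-path assumption as above:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify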
05:06:38 -- common/autotest_common.sh@10 -- # set +x 00:07:48.971 05:06:38 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:48.971 05:06:38 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:48.971 05:06:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.971 05:06:38 -- common/autotest_common.sh@10 -- # set +x 00:07:48.971 ************************************ 00:07:48.971 START TEST accel_dif_generate 00:07:48.971 ************************************ 00:07:48.971 05:06:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:48.971 05:06:38 -- accel/accel.sh@16 -- # local accel_opc 00:07:48.971 05:06:38 -- accel/accel.sh@17 -- # local accel_module 00:07:48.971 05:06:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:48.971 05:06:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:48.971 05:06:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.971 05:06:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:48.971 05:06:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.971 05:06:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.971 05:06:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:48.971 05:06:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:48.971 05:06:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:48.971 05:06:38 -- accel/accel.sh@42 -- # jq -r . 00:07:48.971 [2024-12-08 05:06:38.568522] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:48.971 [2024-12-08 05:06:38.568609] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68795 ] 00:07:48.971 [2024-12-08 05:06:38.705595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.971 [2024-12-08 05:06:38.740711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.351 05:06:39 -- accel/accel.sh@18 -- # out=' 00:07:50.351 SPDK Configuration: 00:07:50.351 Core mask: 0x1 00:07:50.351 00:07:50.351 Accel Perf Configuration: 00:07:50.351 Workload Type: dif_generate 00:07:50.351 Vector size: 4096 bytes 00:07:50.351 Transfer size: 4096 bytes 00:07:50.351 Block size: 512 bytes 00:07:50.351 Metadata size: 8 bytes 00:07:50.351 Vector count 1 00:07:50.351 Module: software 00:07:50.351 Queue depth: 32 00:07:50.351 Allocate depth: 32 00:07:50.351 # threads/core: 1 00:07:50.351 Run time: 1 seconds 00:07:50.351 Verify: No 00:07:50.351 00:07:50.351 Running for 1 seconds... 
00:07:50.351 00:07:50.351 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:50.351 ------------------------------------------------------------------------------------ 00:07:50.351 0,0 126848/s 503 MiB/s 0 0 00:07:50.351 ==================================================================================== 00:07:50.351 Total 126848/s 495 MiB/s 0 0' 00:07:50.351 05:06:39 -- accel/accel.sh@20 -- # IFS=: 00:07:50.351 05:06:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:50.351 05:06:39 -- accel/accel.sh@20 -- # read -r var val 00:07:50.351 05:06:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:50.351 05:06:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.351 05:06:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.351 05:06:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.351 05:06:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.351 05:06:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.351 05:06:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.351 05:06:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.351 05:06:39 -- accel/accel.sh@42 -- # jq -r . 00:07:50.351 [2024-12-08 05:06:39.896646] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:50.351 [2024-12-08 05:06:39.896772] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68809 ] 00:07:50.351 [2024-12-08 05:06:40.033070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.351 [2024-12-08 05:06:40.067369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.351 05:06:40 -- accel/accel.sh@21 -- # val= 00:07:50.351 05:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.351 05:06:40 -- accel/accel.sh@21 -- # val= 00:07:50.351 05:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.351 05:06:40 -- accel/accel.sh@21 -- # val=0x1 00:07:50.351 05:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.351 05:06:40 -- accel/accel.sh@21 -- # val= 00:07:50.351 05:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.351 05:06:40 -- accel/accel.sh@21 -- # val= 00:07:50.351 05:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.351 05:06:40 -- accel/accel.sh@21 -- # val=dif_generate 00:07:50.351 05:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.351 05:06:40 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.351 05:06:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:50.351 05:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # read -r var val 
00:07:50.351 05:06:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:50.351 05:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.351 05:06:40 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:50.351 05:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.351 05:06:40 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:50.351 05:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.351 05:06:40 -- accel/accel.sh@21 -- # val= 00:07:50.351 05:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.351 05:06:40 -- accel/accel.sh@21 -- # val=software 00:07:50.351 05:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.351 05:06:40 -- accel/accel.sh@23 -- # accel_module=software 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.351 05:06:40 -- accel/accel.sh@21 -- # val=32 00:07:50.351 05:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.351 05:06:40 -- accel/accel.sh@21 -- # val=32 00:07:50.351 05:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.351 05:06:40 -- accel/accel.sh@21 -- # val=1 00:07:50.351 05:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.351 05:06:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:50.351 05:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.351 05:06:40 -- accel/accel.sh@21 -- # val=No 00:07:50.351 05:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.351 05:06:40 -- accel/accel.sh@21 -- # val= 00:07:50.351 05:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.351 05:06:40 -- accel/accel.sh@21 -- # val= 00:07:50.351 05:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.351 05:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:51.731 05:06:41 -- accel/accel.sh@21 -- # val= 00:07:51.731 05:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.731 05:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:51.731 05:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:51.731 05:06:41 -- accel/accel.sh@21 -- # val= 00:07:51.731 05:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.731 05:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:51.731 05:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:51.731 05:06:41 -- accel/accel.sh@21 -- # val= 00:07:51.731 05:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.731 05:06:41 -- 
accel/accel.sh@20 -- # IFS=: 00:07:51.731 05:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:51.731 05:06:41 -- accel/accel.sh@21 -- # val= 00:07:51.731 05:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.731 05:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:51.731 05:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:51.731 05:06:41 -- accel/accel.sh@21 -- # val= 00:07:51.731 05:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.731 05:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:51.731 05:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:51.731 05:06:41 -- accel/accel.sh@21 -- # val= 00:07:51.731 05:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.731 05:06:41 -- accel/accel.sh@20 -- # IFS=: 00:07:51.731 05:06:41 -- accel/accel.sh@20 -- # read -r var val 00:07:51.731 05:06:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:51.731 05:06:41 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:51.731 05:06:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.731 00:07:51.731 real 0m2.660s 00:07:51.731 user 0m2.292s 00:07:51.731 sys 0m0.162s 00:07:51.731 05:06:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.731 ************************************ 00:07:51.731 END TEST accel_dif_generate 00:07:51.731 ************************************ 00:07:51.731 05:06:41 -- common/autotest_common.sh@10 -- # set +x 00:07:51.731 05:06:41 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:51.731 05:06:41 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:51.731 05:06:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.731 05:06:41 -- common/autotest_common.sh@10 -- # set +x 00:07:51.731 ************************************ 00:07:51.731 START TEST accel_dif_generate_copy 00:07:51.731 ************************************ 00:07:51.731 05:06:41 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:51.731 05:06:41 -- accel/accel.sh@16 -- # local accel_opc 00:07:51.731 05:06:41 -- accel/accel.sh@17 -- # local accel_module 00:07:51.731 05:06:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:51.731 05:06:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:51.731 05:06:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.731 05:06:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.731 05:06:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.731 05:06:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.731 05:06:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.731 05:06:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.731 05:06:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.731 05:06:41 -- accel/accel.sh@42 -- # jq -r . 00:07:51.731 [2024-12-08 05:06:41.275169] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:51.731 [2024-12-08 05:06:41.275418] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68843 ] 00:07:51.731 [2024-12-08 05:06:41.413919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.731 [2024-12-08 05:06:41.448012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.111 05:06:42 -- accel/accel.sh@18 -- # out=' 00:07:53.111 SPDK Configuration: 00:07:53.111 Core mask: 0x1 00:07:53.111 00:07:53.111 Accel Perf Configuration: 00:07:53.111 Workload Type: dif_generate_copy 00:07:53.111 Vector size: 4096 bytes 00:07:53.111 Transfer size: 4096 bytes 00:07:53.111 Vector count 1 00:07:53.111 Module: software 00:07:53.111 Queue depth: 32 00:07:53.111 Allocate depth: 32 00:07:53.111 # threads/core: 1 00:07:53.111 Run time: 1 seconds 00:07:53.111 Verify: No 00:07:53.111 00:07:53.111 Running for 1 seconds... 00:07:53.111 00:07:53.111 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:53.111 ------------------------------------------------------------------------------------ 00:07:53.111 0,0 87968/s 348 MiB/s 0 0 00:07:53.111 ==================================================================================== 00:07:53.111 Total 87968/s 343 MiB/s 0 0' 00:07:53.111 05:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:53.111 05:06:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:53.111 05:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:53.111 05:06:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.111 05:06:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:53.111 05:06:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.111 05:06:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.111 05:06:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.111 05:06:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.111 05:06:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.111 05:06:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.111 05:06:42 -- accel/accel.sh@42 -- # jq -r . 00:07:53.111 [2024-12-08 05:06:42.613914] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:53.111 [2024-12-08 05:06:42.614021] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68863 ] 00:07:53.111 [2024-12-08 05:06:42.748180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.111 [2024-12-08 05:06:42.786899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.111 05:06:42 -- accel/accel.sh@21 -- # val= 00:07:53.111 05:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.111 05:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:53.111 05:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:53.111 05:06:42 -- accel/accel.sh@21 -- # val= 00:07:53.111 05:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.111 05:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:53.111 05:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:53.111 05:06:42 -- accel/accel.sh@21 -- # val=0x1 00:07:53.111 05:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.111 05:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:53.111 05:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:53.111 05:06:42 -- accel/accel.sh@21 -- # val= 00:07:53.111 05:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.111 05:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:53.111 05:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:53.111 05:06:42 -- accel/accel.sh@21 -- # val= 00:07:53.111 05:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.111 05:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:53.111 05:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:53.111 05:06:42 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:53.111 05:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.111 05:06:42 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:53.111 05:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:53.111 05:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:53.111 05:06:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:53.111 05:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.111 05:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:53.111 05:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:53.111 05:06:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:53.111 05:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.111 05:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:53.111 05:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:53.111 05:06:42 -- accel/accel.sh@21 -- # val= 00:07:53.111 05:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.112 05:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:53.112 05:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:53.112 05:06:42 -- accel/accel.sh@21 -- # val=software 00:07:53.112 05:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.112 05:06:42 -- accel/accel.sh@23 -- # accel_module=software 00:07:53.112 05:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:53.112 05:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:53.112 05:06:42 -- accel/accel.sh@21 -- # val=32 00:07:53.112 05:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.112 05:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:53.112 05:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:53.112 05:06:42 -- accel/accel.sh@21 -- # val=32 00:07:53.112 05:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.112 05:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:53.112 05:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:53.112 05:06:42 -- accel/accel.sh@21 
-- # val=1 00:07:53.112 05:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.112 05:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:53.112 05:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:53.112 05:06:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:53.112 05:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.112 05:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:53.112 05:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:53.112 05:06:42 -- accel/accel.sh@21 -- # val=No 00:07:53.112 05:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.112 05:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:53.112 05:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:53.112 05:06:42 -- accel/accel.sh@21 -- # val= 00:07:53.112 05:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.112 05:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:53.112 05:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:53.112 05:06:42 -- accel/accel.sh@21 -- # val= 00:07:53.112 05:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.112 05:06:42 -- accel/accel.sh@20 -- # IFS=: 00:07:53.112 05:06:42 -- accel/accel.sh@20 -- # read -r var val 00:07:54.569 05:06:43 -- accel/accel.sh@21 -- # val= 00:07:54.569 05:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.569 05:06:43 -- accel/accel.sh@20 -- # IFS=: 00:07:54.569 05:06:43 -- accel/accel.sh@20 -- # read -r var val 00:07:54.569 05:06:43 -- accel/accel.sh@21 -- # val= 00:07:54.569 05:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.569 05:06:43 -- accel/accel.sh@20 -- # IFS=: 00:07:54.569 05:06:43 -- accel/accel.sh@20 -- # read -r var val 00:07:54.569 05:06:43 -- accel/accel.sh@21 -- # val= 00:07:54.569 05:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.569 05:06:43 -- accel/accel.sh@20 -- # IFS=: 00:07:54.569 05:06:43 -- accel/accel.sh@20 -- # read -r var val 00:07:54.569 05:06:43 -- accel/accel.sh@21 -- # val= 00:07:54.569 05:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.569 05:06:43 -- accel/accel.sh@20 -- # IFS=: 00:07:54.569 05:06:43 -- accel/accel.sh@20 -- # read -r var val 00:07:54.569 05:06:43 -- accel/accel.sh@21 -- # val= 00:07:54.569 05:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.569 05:06:43 -- accel/accel.sh@20 -- # IFS=: 00:07:54.569 05:06:43 -- accel/accel.sh@20 -- # read -r var val 00:07:54.569 05:06:43 -- accel/accel.sh@21 -- # val= 00:07:54.569 05:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.569 05:06:43 -- accel/accel.sh@20 -- # IFS=: 00:07:54.569 ************************************ 00:07:54.569 END TEST accel_dif_generate_copy 00:07:54.569 ************************************ 00:07:54.569 05:06:43 -- accel/accel.sh@20 -- # read -r var val 00:07:54.569 05:06:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:54.569 05:06:43 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:54.569 05:06:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.569 00:07:54.569 real 0m2.677s 00:07:54.569 user 0m2.314s 00:07:54.569 sys 0m0.152s 00:07:54.569 05:06:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:54.569 05:06:43 -- common/autotest_common.sh@10 -- # set +x 00:07:54.569 05:06:43 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:54.569 05:06:43 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:54.569 05:06:43 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:54.569 05:06:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.569 05:06:43 -- 
common/autotest_common.sh@10 -- # set +x 00:07:54.569 ************************************ 00:07:54.569 START TEST accel_comp 00:07:54.569 ************************************ 00:07:54.569 05:06:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:54.569 05:06:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:54.569 05:06:43 -- accel/accel.sh@17 -- # local accel_module 00:07:54.569 05:06:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:54.569 05:06:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:54.569 05:06:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.569 05:06:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:54.569 05:06:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.569 05:06:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.569 05:06:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:54.569 05:06:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:54.569 05:06:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:54.569 05:06:43 -- accel/accel.sh@42 -- # jq -r . 00:07:54.569 [2024-12-08 05:06:44.003875] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:54.569 [2024-12-08 05:06:44.003946] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68892 ] 00:07:54.569 [2024-12-08 05:06:44.133947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.569 [2024-12-08 05:06:44.168173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.540 05:06:45 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:55.540 00:07:55.540 SPDK Configuration: 00:07:55.540 Core mask: 0x1 00:07:55.540 00:07:55.540 Accel Perf Configuration: 00:07:55.540 Workload Type: compress 00:07:55.540 Transfer size: 4096 bytes 00:07:55.540 Vector count 1 00:07:55.540 Module: software 00:07:55.540 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:55.540 Queue depth: 32 00:07:55.540 Allocate depth: 32 00:07:55.540 # threads/core: 1 00:07:55.540 Run time: 1 seconds 00:07:55.540 Verify: No 00:07:55.540 00:07:55.540 Running for 1 seconds... 
00:07:55.540 00:07:55.540 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:55.540 ------------------------------------------------------------------------------------ 00:07:55.540 0,0 47136/s 196 MiB/s 0 0 00:07:55.540 ==================================================================================== 00:07:55.540 Total 47136/s 184 MiB/s 0 0' 00:07:55.540 05:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:55.540 05:06:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:55.540 05:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:55.540 05:06:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:55.540 05:06:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:55.540 05:06:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:55.540 05:06:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.540 05:06:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.540 05:06:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:55.540 05:06:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:55.540 05:06:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:55.540 05:06:45 -- accel/accel.sh@42 -- # jq -r . 00:07:55.798 [2024-12-08 05:06:45.330585] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:55.798 [2024-12-08 05:06:45.330703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68906 ] 00:07:55.798 [2024-12-08 05:06:45.468603] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.798 [2024-12-08 05:06:45.506890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.798 05:06:45 -- accel/accel.sh@21 -- # val= 00:07:55.798 05:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:55.798 05:06:45 -- accel/accel.sh@21 -- # val= 00:07:55.798 05:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:55.798 05:06:45 -- accel/accel.sh@21 -- # val= 00:07:55.798 05:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:55.798 05:06:45 -- accel/accel.sh@21 -- # val=0x1 00:07:55.798 05:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:55.798 05:06:45 -- accel/accel.sh@21 -- # val= 00:07:55.798 05:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:55.798 05:06:45 -- accel/accel.sh@21 -- # val= 00:07:55.798 05:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:55.798 05:06:45 -- accel/accel.sh@21 -- # val=compress 00:07:55.798 05:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.798 05:06:45 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # IFS=: 
00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:55.798 05:06:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:55.798 05:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:55.798 05:06:45 -- accel/accel.sh@21 -- # val= 00:07:55.798 05:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:55.798 05:06:45 -- accel/accel.sh@21 -- # val=software 00:07:55.798 05:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.798 05:06:45 -- accel/accel.sh@23 -- # accel_module=software 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:55.798 05:06:45 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:55.798 05:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:55.798 05:06:45 -- accel/accel.sh@21 -- # val=32 00:07:55.798 05:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:55.798 05:06:45 -- accel/accel.sh@21 -- # val=32 00:07:55.798 05:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:55.798 05:06:45 -- accel/accel.sh@21 -- # val=1 00:07:55.798 05:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:55.798 05:06:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:55.798 05:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:55.798 05:06:45 -- accel/accel.sh@21 -- # val=No 00:07:55.798 05:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:55.798 05:06:45 -- accel/accel.sh@21 -- # val= 00:07:55.798 05:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:55.798 05:06:45 -- accel/accel.sh@21 -- # val= 00:07:55.798 05:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # IFS=: 00:07:55.798 05:06:45 -- accel/accel.sh@20 -- # read -r var val 00:07:57.193 05:06:46 -- accel/accel.sh@21 -- # val= 00:07:57.193 05:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.193 05:06:46 -- accel/accel.sh@20 -- # IFS=: 00:07:57.193 05:06:46 -- accel/accel.sh@20 -- # read -r var val 00:07:57.193 05:06:46 -- accel/accel.sh@21 -- # val= 00:07:57.193 05:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.193 05:06:46 -- accel/accel.sh@20 -- # IFS=: 00:07:57.193 05:06:46 -- accel/accel.sh@20 -- # read -r var val 00:07:57.193 05:06:46 -- accel/accel.sh@21 -- # val= 00:07:57.193 05:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.193 05:06:46 -- accel/accel.sh@20 -- # IFS=: 00:07:57.193 05:06:46 -- accel/accel.sh@20 -- # read -r var val 00:07:57.193 05:06:46 -- accel/accel.sh@21 -- # val= 
00:07:57.193 05:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.193 05:06:46 -- accel/accel.sh@20 -- # IFS=: 00:07:57.193 05:06:46 -- accel/accel.sh@20 -- # read -r var val 00:07:57.193 05:06:46 -- accel/accel.sh@21 -- # val= 00:07:57.193 05:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.193 05:06:46 -- accel/accel.sh@20 -- # IFS=: 00:07:57.193 05:06:46 -- accel/accel.sh@20 -- # read -r var val 00:07:57.193 05:06:46 -- accel/accel.sh@21 -- # val= 00:07:57.193 05:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.193 05:06:46 -- accel/accel.sh@20 -- # IFS=: 00:07:57.193 05:06:46 -- accel/accel.sh@20 -- # read -r var val 00:07:57.193 05:06:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:57.193 05:06:46 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:57.193 05:06:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.193 00:07:57.193 real 0m2.665s 00:07:57.193 user 0m2.296s 00:07:57.193 sys 0m0.161s 00:07:57.193 05:06:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:57.193 05:06:46 -- common/autotest_common.sh@10 -- # set +x 00:07:57.193 ************************************ 00:07:57.193 END TEST accel_comp 00:07:57.193 ************************************ 00:07:57.193 05:06:46 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:57.193 05:06:46 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:57.193 05:06:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.193 05:06:46 -- common/autotest_common.sh@10 -- # set +x 00:07:57.193 ************************************ 00:07:57.193 START TEST accel_decomp 00:07:57.193 ************************************ 00:07:57.193 05:06:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:57.193 05:06:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:57.193 05:06:46 -- accel/accel.sh@17 -- # local accel_module 00:07:57.193 05:06:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:57.193 05:06:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:57.193 05:06:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:57.193 05:06:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:57.193 05:06:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.193 05:06:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.193 05:06:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:57.193 05:06:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:57.193 05:06:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:57.193 05:06:46 -- accel/accel.sh@42 -- # jq -r . 00:07:57.193 [2024-12-08 05:06:46.723256] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:57.193 [2024-12-08 05:06:46.723356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68946 ] 00:07:57.193 [2024-12-08 05:06:46.853129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.193 [2024-12-08 05:06:46.888432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.572 05:06:48 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:58.572 00:07:58.572 SPDK Configuration: 00:07:58.572 Core mask: 0x1 00:07:58.572 00:07:58.572 Accel Perf Configuration: 00:07:58.572 Workload Type: decompress 00:07:58.572 Transfer size: 4096 bytes 00:07:58.572 Vector count 1 00:07:58.572 Module: software 00:07:58.572 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:58.572 Queue depth: 32 00:07:58.572 Allocate depth: 32 00:07:58.572 # threads/core: 1 00:07:58.572 Run time: 1 seconds 00:07:58.572 Verify: Yes 00:07:58.572 00:07:58.572 Running for 1 seconds... 00:07:58.572 00:07:58.572 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:58.572 ------------------------------------------------------------------------------------ 00:07:58.572 0,0 64992/s 119 MiB/s 0 0 00:07:58.572 ==================================================================================== 00:07:58.572 Total 64992/s 253 MiB/s 0 0' 00:07:58.572 05:06:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:58.572 05:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:58.572 05:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:58.572 05:06:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:58.572 05:06:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:58.572 05:06:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:58.572 05:06:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.572 05:06:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.572 05:06:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:58.572 05:06:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:58.572 05:06:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:58.572 05:06:48 -- accel/accel.sh@42 -- # jq -r . 00:07:58.572 [2024-12-08 05:06:48.047200] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:58.572 [2024-12-08 05:06:48.047470] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68960 ] 00:07:58.572 [2024-12-08 05:06:48.184766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.572 [2024-12-08 05:06:48.222606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.572 05:06:48 -- accel/accel.sh@21 -- # val= 00:07:58.572 05:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.572 05:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:58.572 05:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:58.572 05:06:48 -- accel/accel.sh@21 -- # val= 00:07:58.572 05:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.572 05:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:58.572 05:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:58.572 05:06:48 -- accel/accel.sh@21 -- # val= 00:07:58.572 05:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:58.573 05:06:48 -- accel/accel.sh@21 -- # val=0x1 00:07:58.573 05:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:58.573 05:06:48 -- accel/accel.sh@21 -- # val= 00:07:58.573 05:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:58.573 05:06:48 -- accel/accel.sh@21 -- # val= 00:07:58.573 05:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:58.573 05:06:48 -- accel/accel.sh@21 -- # val=decompress 00:07:58.573 05:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.573 05:06:48 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:58.573 05:06:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:58.573 05:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:58.573 05:06:48 -- accel/accel.sh@21 -- # val= 00:07:58.573 05:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:58.573 05:06:48 -- accel/accel.sh@21 -- # val=software 00:07:58.573 05:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.573 05:06:48 -- accel/accel.sh@23 -- # accel_module=software 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:58.573 05:06:48 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:58.573 05:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:58.573 05:06:48 -- accel/accel.sh@21 -- # val=32 00:07:58.573 05:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:58.573 05:06:48 -- 
accel/accel.sh@21 -- # val=32 00:07:58.573 05:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:58.573 05:06:48 -- accel/accel.sh@21 -- # val=1 00:07:58.573 05:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:58.573 05:06:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:58.573 05:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:58.573 05:06:48 -- accel/accel.sh@21 -- # val=Yes 00:07:58.573 05:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:58.573 05:06:48 -- accel/accel.sh@21 -- # val= 00:07:58.573 05:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:58.573 05:06:48 -- accel/accel.sh@21 -- # val= 00:07:58.573 05:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # IFS=: 00:07:58.573 05:06:48 -- accel/accel.sh@20 -- # read -r var val 00:07:59.953 05:06:49 -- accel/accel.sh@21 -- # val= 00:07:59.953 05:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.953 05:06:49 -- accel/accel.sh@20 -- # IFS=: 00:07:59.953 05:06:49 -- accel/accel.sh@20 -- # read -r var val 00:07:59.953 05:06:49 -- accel/accel.sh@21 -- # val= 00:07:59.953 05:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.953 05:06:49 -- accel/accel.sh@20 -- # IFS=: 00:07:59.953 05:06:49 -- accel/accel.sh@20 -- # read -r var val 00:07:59.953 05:06:49 -- accel/accel.sh@21 -- # val= 00:07:59.953 05:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.953 05:06:49 -- accel/accel.sh@20 -- # IFS=: 00:07:59.953 05:06:49 -- accel/accel.sh@20 -- # read -r var val 00:07:59.953 05:06:49 -- accel/accel.sh@21 -- # val= 00:07:59.953 05:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.953 05:06:49 -- accel/accel.sh@20 -- # IFS=: 00:07:59.953 05:06:49 -- accel/accel.sh@20 -- # read -r var val 00:07:59.953 05:06:49 -- accel/accel.sh@21 -- # val= 00:07:59.953 05:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.953 05:06:49 -- accel/accel.sh@20 -- # IFS=: 00:07:59.953 05:06:49 -- accel/accel.sh@20 -- # read -r var val 00:07:59.953 05:06:49 -- accel/accel.sh@21 -- # val= 00:07:59.953 ************************************ 00:07:59.953 END TEST accel_decomp 00:07:59.953 ************************************ 00:07:59.953 05:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.953 05:06:49 -- accel/accel.sh@20 -- # IFS=: 00:07:59.953 05:06:49 -- accel/accel.sh@20 -- # read -r var val 00:07:59.953 05:06:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:59.953 05:06:49 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:59.953 05:06:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:59.953 00:07:59.953 real 0m2.670s 00:07:59.953 user 0m2.302s 00:07:59.953 sys 0m0.163s 00:07:59.953 05:06:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:59.953 05:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:59.953 05:06:49 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
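The compression cases differ from the buffer-only workloads above in that they read an input file: -l points at test/accel/bib inside the repo, accel_comp measures compress, and the decompress runs add -y to verify the inflated data (the accel_decmop_full variant launched in the run_test line just above additionally passes -o 0). A sketch of the compress/decompress pair, again assuming this CI node's repo location:

  BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l "$BIB"
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y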
00:07:59.953 05:06:49 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:59.953 05:06:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:59.953 05:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:59.953 ************************************ 00:07:59.953 START TEST accel_decmop_full 00:07:59.953 ************************************ 00:07:59.953 05:06:49 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:59.953 05:06:49 -- accel/accel.sh@16 -- # local accel_opc 00:07:59.953 05:06:49 -- accel/accel.sh@17 -- # local accel_module 00:07:59.953 05:06:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:59.953 05:06:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:59.953 05:06:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:59.953 05:06:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:59.953 05:06:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.953 05:06:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.953 05:06:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:59.953 05:06:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:59.953 05:06:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:59.953 05:06:49 -- accel/accel.sh@42 -- # jq -r . 00:07:59.953 [2024-12-08 05:06:49.444885] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:59.953 [2024-12-08 05:06:49.445161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68992 ] 00:07:59.953 [2024-12-08 05:06:49.583287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.953 [2024-12-08 05:06:49.621233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.332 05:06:50 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:01.332 00:08:01.332 SPDK Configuration: 00:08:01.332 Core mask: 0x1 00:08:01.332 00:08:01.332 Accel Perf Configuration: 00:08:01.332 Workload Type: decompress 00:08:01.332 Transfer size: 111250 bytes 00:08:01.332 Vector count 1 00:08:01.332 Module: software 00:08:01.332 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:01.332 Queue depth: 32 00:08:01.332 Allocate depth: 32 00:08:01.332 # threads/core: 1 00:08:01.332 Run time: 1 seconds 00:08:01.332 Verify: Yes 00:08:01.332 00:08:01.332 Running for 1 seconds... 
00:08:01.332 00:08:01.332 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:01.332 ------------------------------------------------------------------------------------ 00:08:01.332 0,0 4320/s 178 MiB/s 0 0 00:08:01.332 ==================================================================================== 00:08:01.332 Total 4320/s 458 MiB/s 0 0' 00:08:01.332 05:06:50 -- accel/accel.sh@20 -- # IFS=: 00:08:01.332 05:06:50 -- accel/accel.sh@20 -- # read -r var val 00:08:01.332 05:06:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:01.332 05:06:50 -- accel/accel.sh@12 -- # build_accel_config 00:08:01.332 05:06:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:01.332 05:06:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:01.332 05:06:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.332 05:06:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.332 05:06:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:01.332 05:06:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:01.332 05:06:50 -- accel/accel.sh@41 -- # local IFS=, 00:08:01.332 05:06:50 -- accel/accel.sh@42 -- # jq -r . 00:08:01.332 [2024-12-08 05:06:50.798911] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:01.332 [2024-12-08 05:06:50.799005] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69017 ] 00:08:01.332 [2024-12-08 05:06:50.935686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.332 [2024-12-08 05:06:50.975582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.332 05:06:51 -- accel/accel.sh@21 -- # val= 00:08:01.332 05:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # IFS=: 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # read -r var val 00:08:01.332 05:06:51 -- accel/accel.sh@21 -- # val= 00:08:01.332 05:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # IFS=: 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # read -r var val 00:08:01.332 05:06:51 -- accel/accel.sh@21 -- # val= 00:08:01.332 05:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # IFS=: 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # read -r var val 00:08:01.332 05:06:51 -- accel/accel.sh@21 -- # val=0x1 00:08:01.332 05:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # IFS=: 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # read -r var val 00:08:01.332 05:06:51 -- accel/accel.sh@21 -- # val= 00:08:01.332 05:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # IFS=: 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # read -r var val 00:08:01.332 05:06:51 -- accel/accel.sh@21 -- # val= 00:08:01.332 05:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # IFS=: 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # read -r var val 00:08:01.332 05:06:51 -- accel/accel.sh@21 -- # val=decompress 00:08:01.332 05:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.332 05:06:51 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:01.332 05:06:51 -- accel/accel.sh@20 
-- # IFS=: 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # read -r var val 00:08:01.332 05:06:51 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:01.332 05:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # IFS=: 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # read -r var val 00:08:01.332 05:06:51 -- accel/accel.sh@21 -- # val= 00:08:01.332 05:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # IFS=: 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # read -r var val 00:08:01.332 05:06:51 -- accel/accel.sh@21 -- # val=software 00:08:01.332 05:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.332 05:06:51 -- accel/accel.sh@23 -- # accel_module=software 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # IFS=: 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # read -r var val 00:08:01.332 05:06:51 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:01.332 05:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # IFS=: 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # read -r var val 00:08:01.332 05:06:51 -- accel/accel.sh@21 -- # val=32 00:08:01.332 05:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # IFS=: 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # read -r var val 00:08:01.332 05:06:51 -- accel/accel.sh@21 -- # val=32 00:08:01.332 05:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # IFS=: 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # read -r var val 00:08:01.332 05:06:51 -- accel/accel.sh@21 -- # val=1 00:08:01.332 05:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # IFS=: 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # read -r var val 00:08:01.332 05:06:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:01.332 05:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # IFS=: 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # read -r var val 00:08:01.332 05:06:51 -- accel/accel.sh@21 -- # val=Yes 00:08:01.332 05:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # IFS=: 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # read -r var val 00:08:01.332 05:06:51 -- accel/accel.sh@21 -- # val= 00:08:01.332 05:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # IFS=: 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # read -r var val 00:08:01.332 05:06:51 -- accel/accel.sh@21 -- # val= 00:08:01.332 05:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # IFS=: 00:08:01.332 05:06:51 -- accel/accel.sh@20 -- # read -r var val 00:08:02.710 05:06:52 -- accel/accel.sh@21 -- # val= 00:08:02.710 05:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.710 05:06:52 -- accel/accel.sh@20 -- # IFS=: 00:08:02.710 05:06:52 -- accel/accel.sh@20 -- # read -r var val 00:08:02.710 05:06:52 -- accel/accel.sh@21 -- # val= 00:08:02.710 05:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.710 05:06:52 -- accel/accel.sh@20 -- # IFS=: 00:08:02.710 05:06:52 -- accel/accel.sh@20 -- # read -r var val 00:08:02.710 05:06:52 -- accel/accel.sh@21 -- # val= 00:08:02.710 05:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.710 05:06:52 -- accel/accel.sh@20 -- # IFS=: 00:08:02.710 05:06:52 -- accel/accel.sh@20 -- # read -r var val 00:08:02.710 05:06:52 -- accel/accel.sh@21 -- # 
val= 00:08:02.710 05:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.710 05:06:52 -- accel/accel.sh@20 -- # IFS=: 00:08:02.710 05:06:52 -- accel/accel.sh@20 -- # read -r var val 00:08:02.710 05:06:52 -- accel/accel.sh@21 -- # val= 00:08:02.710 05:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.710 05:06:52 -- accel/accel.sh@20 -- # IFS=: 00:08:02.710 05:06:52 -- accel/accel.sh@20 -- # read -r var val 00:08:02.710 05:06:52 -- accel/accel.sh@21 -- # val= 00:08:02.710 05:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.710 05:06:52 -- accel/accel.sh@20 -- # IFS=: 00:08:02.710 05:06:52 -- accel/accel.sh@20 -- # read -r var val 00:08:02.710 05:06:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:02.710 05:06:52 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:02.710 05:06:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:02.710 00:08:02.710 real 0m2.711s 00:08:02.710 user 0m2.345s 00:08:02.710 sys 0m0.158s 00:08:02.710 05:06:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.710 ************************************ 00:08:02.710 END TEST accel_decmop_full 00:08:02.710 ************************************ 00:08:02.710 05:06:52 -- common/autotest_common.sh@10 -- # set +x 00:08:02.710 05:06:52 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:02.710 05:06:52 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:02.710 05:06:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.710 05:06:52 -- common/autotest_common.sh@10 -- # set +x 00:08:02.710 ************************************ 00:08:02.710 START TEST accel_decomp_mcore 00:08:02.710 ************************************ 00:08:02.710 05:06:52 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:02.710 05:06:52 -- accel/accel.sh@16 -- # local accel_opc 00:08:02.710 05:06:52 -- accel/accel.sh@17 -- # local accel_module 00:08:02.710 05:06:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:02.710 05:06:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:02.710 05:06:52 -- accel/accel.sh@12 -- # build_accel_config 00:08:02.710 05:06:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:02.710 05:06:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.710 05:06:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.710 05:06:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:02.710 05:06:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:02.710 05:06:52 -- accel/accel.sh@41 -- # local IFS=, 00:08:02.710 05:06:52 -- accel/accel.sh@42 -- # jq -r . 00:08:02.710 [2024-12-08 05:06:52.205375] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:02.710 [2024-12-08 05:06:52.205613] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69046 ] 00:08:02.710 [2024-12-08 05:06:52.341047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.710 [2024-12-08 05:06:52.380443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.710 [2024-12-08 05:06:52.380560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.710 [2024-12-08 05:06:52.380684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.710 [2024-12-08 05:06:52.380707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.084 05:06:53 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:04.084 00:08:04.084 SPDK Configuration: 00:08:04.084 Core mask: 0xf 00:08:04.084 00:08:04.084 Accel Perf Configuration: 00:08:04.084 Workload Type: decompress 00:08:04.084 Transfer size: 4096 bytes 00:08:04.084 Vector count 1 00:08:04.084 Module: software 00:08:04.084 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:04.084 Queue depth: 32 00:08:04.084 Allocate depth: 32 00:08:04.084 # threads/core: 1 00:08:04.084 Run time: 1 seconds 00:08:04.084 Verify: Yes 00:08:04.084 00:08:04.084 Running for 1 seconds... 00:08:04.084 00:08:04.084 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:04.084 ------------------------------------------------------------------------------------ 00:08:04.084 0,0 56512/s 104 MiB/s 0 0 00:08:04.084 3,0 54912/s 101 MiB/s 0 0 00:08:04.084 2,0 50112/s 92 MiB/s 0 0 00:08:04.084 1,0 54208/s 99 MiB/s 0 0 00:08:04.084 ==================================================================================== 00:08:04.084 Total 215744/s 842 MiB/s 0 0' 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # IFS=: 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # read -r var val 00:08:04.084 05:06:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:04.084 05:06:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:04.084 05:06:53 -- accel/accel.sh@12 -- # build_accel_config 00:08:04.084 05:06:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:04.084 05:06:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.084 05:06:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.084 05:06:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:04.084 05:06:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:04.084 05:06:53 -- accel/accel.sh@41 -- # local IFS=, 00:08:04.084 05:06:53 -- accel/accel.sh@42 -- # jq -r . 00:08:04.084 [2024-12-08 05:06:53.546962] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:04.084 [2024-12-08 05:06:53.547223] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69063 ] 00:08:04.084 [2024-12-08 05:06:53.687439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.084 [2024-12-08 05:06:53.731092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.084 [2024-12-08 05:06:53.731248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.084 [2024-12-08 05:06:53.731346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.084 [2024-12-08 05:06:53.731574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.084 05:06:53 -- accel/accel.sh@21 -- # val= 00:08:04.084 05:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # IFS=: 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # read -r var val 00:08:04.084 05:06:53 -- accel/accel.sh@21 -- # val= 00:08:04.084 05:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # IFS=: 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # read -r var val 00:08:04.084 05:06:53 -- accel/accel.sh@21 -- # val= 00:08:04.084 05:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # IFS=: 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # read -r var val 00:08:04.084 05:06:53 -- accel/accel.sh@21 -- # val=0xf 00:08:04.084 05:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # IFS=: 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # read -r var val 00:08:04.084 05:06:53 -- accel/accel.sh@21 -- # val= 00:08:04.084 05:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # IFS=: 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # read -r var val 00:08:04.084 05:06:53 -- accel/accel.sh@21 -- # val= 00:08:04.084 05:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # IFS=: 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # read -r var val 00:08:04.084 05:06:53 -- accel/accel.sh@21 -- # val=decompress 00:08:04.084 05:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.084 05:06:53 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # IFS=: 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # read -r var val 00:08:04.084 05:06:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:04.084 05:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # IFS=: 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # read -r var val 00:08:04.084 05:06:53 -- accel/accel.sh@21 -- # val= 00:08:04.084 05:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # IFS=: 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # read -r var val 00:08:04.084 05:06:53 -- accel/accel.sh@21 -- # val=software 00:08:04.084 05:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.084 05:06:53 -- accel/accel.sh@23 -- # accel_module=software 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # IFS=: 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # read -r var val 00:08:04.084 05:06:53 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:04.084 05:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # IFS=: 
00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # read -r var val 00:08:04.084 05:06:53 -- accel/accel.sh@21 -- # val=32 00:08:04.084 05:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # IFS=: 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # read -r var val 00:08:04.084 05:06:53 -- accel/accel.sh@21 -- # val=32 00:08:04.084 05:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # IFS=: 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # read -r var val 00:08:04.084 05:06:53 -- accel/accel.sh@21 -- # val=1 00:08:04.084 05:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # IFS=: 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # read -r var val 00:08:04.084 05:06:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:04.084 05:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # IFS=: 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # read -r var val 00:08:04.084 05:06:53 -- accel/accel.sh@21 -- # val=Yes 00:08:04.084 05:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # IFS=: 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # read -r var val 00:08:04.084 05:06:53 -- accel/accel.sh@21 -- # val= 00:08:04.084 05:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # IFS=: 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # read -r var val 00:08:04.084 05:06:53 -- accel/accel.sh@21 -- # val= 00:08:04.084 05:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # IFS=: 00:08:04.084 05:06:53 -- accel/accel.sh@20 -- # read -r var val 00:08:05.460 05:06:54 -- accel/accel.sh@21 -- # val= 00:08:05.460 05:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.460 05:06:54 -- accel/accel.sh@20 -- # IFS=: 00:08:05.460 05:06:54 -- accel/accel.sh@20 -- # read -r var val 00:08:05.460 05:06:54 -- accel/accel.sh@21 -- # val= 00:08:05.460 05:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.460 05:06:54 -- accel/accel.sh@20 -- # IFS=: 00:08:05.460 05:06:54 -- accel/accel.sh@20 -- # read -r var val 00:08:05.460 05:06:54 -- accel/accel.sh@21 -- # val= 00:08:05.460 05:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.460 05:06:54 -- accel/accel.sh@20 -- # IFS=: 00:08:05.460 05:06:54 -- accel/accel.sh@20 -- # read -r var val 00:08:05.460 05:06:54 -- accel/accel.sh@21 -- # val= 00:08:05.460 05:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.460 05:06:54 -- accel/accel.sh@20 -- # IFS=: 00:08:05.460 05:06:54 -- accel/accel.sh@20 -- # read -r var val 00:08:05.460 05:06:54 -- accel/accel.sh@21 -- # val= 00:08:05.460 05:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.460 05:06:54 -- accel/accel.sh@20 -- # IFS=: 00:08:05.460 05:06:54 -- accel/accel.sh@20 -- # read -r var val 00:08:05.460 05:06:54 -- accel/accel.sh@21 -- # val= 00:08:05.460 05:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.460 05:06:54 -- accel/accel.sh@20 -- # IFS=: 00:08:05.460 05:06:54 -- accel/accel.sh@20 -- # read -r var val 00:08:05.460 05:06:54 -- accel/accel.sh@21 -- # val= 00:08:05.460 05:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.460 05:06:54 -- accel/accel.sh@20 -- # IFS=: 00:08:05.460 05:06:54 -- accel/accel.sh@20 -- # read -r var val 00:08:05.460 05:06:54 -- accel/accel.sh@21 -- # val= 00:08:05.460 05:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.460 05:06:54 -- accel/accel.sh@20 -- # IFS=: 00:08:05.460 05:06:54 -- 
accel/accel.sh@20 -- # read -r var val 00:08:05.460 05:06:54 -- accel/accel.sh@21 -- # val= 00:08:05.460 05:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.460 05:06:54 -- accel/accel.sh@20 -- # IFS=: 00:08:05.460 05:06:54 -- accel/accel.sh@20 -- # read -r var val 00:08:05.460 05:06:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:05.460 05:06:54 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:05.460 ************************************ 00:08:05.460 END TEST accel_decomp_mcore 00:08:05.460 ************************************ 00:08:05.460 05:06:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:05.460 00:08:05.460 real 0m2.700s 00:08:05.460 user 0m8.779s 00:08:05.460 sys 0m0.187s 00:08:05.460 05:06:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:05.460 05:06:54 -- common/autotest_common.sh@10 -- # set +x 00:08:05.460 05:06:54 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.460 05:06:54 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:05.460 05:06:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.460 05:06:54 -- common/autotest_common.sh@10 -- # set +x 00:08:05.460 ************************************ 00:08:05.460 START TEST accel_decomp_full_mcore 00:08:05.460 ************************************ 00:08:05.460 05:06:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.460 05:06:54 -- accel/accel.sh@16 -- # local accel_opc 00:08:05.460 05:06:54 -- accel/accel.sh@17 -- # local accel_module 00:08:05.460 05:06:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.460 05:06:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.460 05:06:54 -- accel/accel.sh@12 -- # build_accel_config 00:08:05.460 05:06:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:05.460 05:06:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.460 05:06:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.460 05:06:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:05.460 05:06:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:05.460 05:06:54 -- accel/accel.sh@41 -- # local IFS=, 00:08:05.460 05:06:54 -- accel/accel.sh@42 -- # jq -r . 00:08:05.460 [2024-12-08 05:06:54.949317] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:05.460 [2024-12-08 05:06:54.949540] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69106 ] 00:08:05.460 [2024-12-08 05:06:55.080517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.460 [2024-12-08 05:06:55.122110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.460 [2024-12-08 05:06:55.122234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.460 [2024-12-08 05:06:55.122353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.460 [2024-12-08 05:06:55.122353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.839 05:06:56 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:06.839 00:08:06.839 SPDK Configuration: 00:08:06.839 Core mask: 0xf 00:08:06.839 00:08:06.839 Accel Perf Configuration: 00:08:06.839 Workload Type: decompress 00:08:06.839 Transfer size: 111250 bytes 00:08:06.839 Vector count 1 00:08:06.839 Module: software 00:08:06.839 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:06.839 Queue depth: 32 00:08:06.839 Allocate depth: 32 00:08:06.839 # threads/core: 1 00:08:06.839 Run time: 1 seconds 00:08:06.839 Verify: Yes 00:08:06.839 00:08:06.839 Running for 1 seconds... 00:08:06.839 00:08:06.839 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:06.839 ------------------------------------------------------------------------------------ 00:08:06.839 0,0 4512/s 186 MiB/s 0 0 00:08:06.839 3,0 4480/s 185 MiB/s 0 0 00:08:06.839 2,0 4544/s 187 MiB/s 0 0 00:08:06.839 1,0 4576/s 189 MiB/s 0 0 00:08:06.839 ==================================================================================== 00:08:06.839 Total 18112/s 1921 MiB/s 0 0' 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # IFS=: 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # read -r var val 00:08:06.839 05:06:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:06.839 05:06:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:06.839 05:06:56 -- accel/accel.sh@12 -- # build_accel_config 00:08:06.839 05:06:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:06.839 05:06:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.839 05:06:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.839 05:06:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:06.839 05:06:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:06.839 05:06:56 -- accel/accel.sh@41 -- # local IFS=, 00:08:06.839 05:06:56 -- accel/accel.sh@42 -- # jq -r . 00:08:06.839 [2024-12-08 05:06:56.306204] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:06.839 [2024-12-08 05:06:56.306298] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69123 ] 00:08:06.839 [2024-12-08 05:06:56.444254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.839 [2024-12-08 05:06:56.488055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.839 [2024-12-08 05:06:56.488175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.839 [2024-12-08 05:06:56.488321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.839 [2024-12-08 05:06:56.488619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.839 05:06:56 -- accel/accel.sh@21 -- # val= 00:08:06.839 05:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # IFS=: 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # read -r var val 00:08:06.839 05:06:56 -- accel/accel.sh@21 -- # val= 00:08:06.839 05:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # IFS=: 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # read -r var val 00:08:06.839 05:06:56 -- accel/accel.sh@21 -- # val= 00:08:06.839 05:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # IFS=: 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # read -r var val 00:08:06.839 05:06:56 -- accel/accel.sh@21 -- # val=0xf 00:08:06.839 05:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # IFS=: 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # read -r var val 00:08:06.839 05:06:56 -- accel/accel.sh@21 -- # val= 00:08:06.839 05:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # IFS=: 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # read -r var val 00:08:06.839 05:06:56 -- accel/accel.sh@21 -- # val= 00:08:06.839 05:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # IFS=: 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # read -r var val 00:08:06.839 05:06:56 -- accel/accel.sh@21 -- # val=decompress 00:08:06.839 05:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.839 05:06:56 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # IFS=: 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # read -r var val 00:08:06.839 05:06:56 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:06.839 05:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # IFS=: 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # read -r var val 00:08:06.839 05:06:56 -- accel/accel.sh@21 -- # val= 00:08:06.839 05:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # IFS=: 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # read -r var val 00:08:06.839 05:06:56 -- accel/accel.sh@21 -- # val=software 00:08:06.839 05:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.839 05:06:56 -- accel/accel.sh@23 -- # accel_module=software 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # IFS=: 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # read -r var val 00:08:06.839 05:06:56 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:06.839 05:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # IFS=: 
00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # read -r var val 00:08:06.839 05:06:56 -- accel/accel.sh@21 -- # val=32 00:08:06.839 05:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # IFS=: 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # read -r var val 00:08:06.839 05:06:56 -- accel/accel.sh@21 -- # val=32 00:08:06.839 05:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # IFS=: 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # read -r var val 00:08:06.839 05:06:56 -- accel/accel.sh@21 -- # val=1 00:08:06.839 05:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # IFS=: 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # read -r var val 00:08:06.839 05:06:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:06.839 05:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # IFS=: 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # read -r var val 00:08:06.839 05:06:56 -- accel/accel.sh@21 -- # val=Yes 00:08:06.839 05:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # IFS=: 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # read -r var val 00:08:06.839 05:06:56 -- accel/accel.sh@21 -- # val= 00:08:06.839 05:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # IFS=: 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # read -r var val 00:08:06.839 05:06:56 -- accel/accel.sh@21 -- # val= 00:08:06.839 05:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # IFS=: 00:08:06.839 05:06:56 -- accel/accel.sh@20 -- # read -r var val 00:08:08.218 05:06:57 -- accel/accel.sh@21 -- # val= 00:08:08.218 05:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.218 05:06:57 -- accel/accel.sh@20 -- # IFS=: 00:08:08.218 05:06:57 -- accel/accel.sh@20 -- # read -r var val 00:08:08.218 05:06:57 -- accel/accel.sh@21 -- # val= 00:08:08.218 05:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.218 05:06:57 -- accel/accel.sh@20 -- # IFS=: 00:08:08.218 05:06:57 -- accel/accel.sh@20 -- # read -r var val 00:08:08.218 05:06:57 -- accel/accel.sh@21 -- # val= 00:08:08.218 05:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.218 05:06:57 -- accel/accel.sh@20 -- # IFS=: 00:08:08.218 05:06:57 -- accel/accel.sh@20 -- # read -r var val 00:08:08.218 05:06:57 -- accel/accel.sh@21 -- # val= 00:08:08.218 05:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.218 05:06:57 -- accel/accel.sh@20 -- # IFS=: 00:08:08.218 05:06:57 -- accel/accel.sh@20 -- # read -r var val 00:08:08.218 05:06:57 -- accel/accel.sh@21 -- # val= 00:08:08.218 05:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.218 05:06:57 -- accel/accel.sh@20 -- # IFS=: 00:08:08.218 05:06:57 -- accel/accel.sh@20 -- # read -r var val 00:08:08.218 05:06:57 -- accel/accel.sh@21 -- # val= 00:08:08.218 05:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.218 05:06:57 -- accel/accel.sh@20 -- # IFS=: 00:08:08.218 05:06:57 -- accel/accel.sh@20 -- # read -r var val 00:08:08.218 05:06:57 -- accel/accel.sh@21 -- # val= 00:08:08.218 05:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.218 05:06:57 -- accel/accel.sh@20 -- # IFS=: 00:08:08.218 05:06:57 -- accel/accel.sh@20 -- # read -r var val 00:08:08.218 05:06:57 -- accel/accel.sh@21 -- # val= 00:08:08.218 05:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.218 05:06:57 -- accel/accel.sh@20 -- # IFS=: 00:08:08.218 05:06:57 -- 
accel/accel.sh@20 -- # read -r var val 00:08:08.218 05:06:57 -- accel/accel.sh@21 -- # val= 00:08:08.218 ************************************ 00:08:08.218 END TEST accel_decomp_full_mcore 00:08:08.218 ************************************ 00:08:08.218 05:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.218 05:06:57 -- accel/accel.sh@20 -- # IFS=: 00:08:08.218 05:06:57 -- accel/accel.sh@20 -- # read -r var val 00:08:08.218 05:06:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:08.218 05:06:57 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:08.218 05:06:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.218 00:08:08.218 real 0m2.734s 00:08:08.218 user 0m8.880s 00:08:08.218 sys 0m0.214s 00:08:08.218 05:06:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:08.218 05:06:57 -- common/autotest_common.sh@10 -- # set +x 00:08:08.218 05:06:57 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:08.218 05:06:57 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:08.218 05:06:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.218 05:06:57 -- common/autotest_common.sh@10 -- # set +x 00:08:08.218 ************************************ 00:08:08.218 START TEST accel_decomp_mthread 00:08:08.218 ************************************ 00:08:08.218 05:06:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:08.218 05:06:57 -- accel/accel.sh@16 -- # local accel_opc 00:08:08.218 05:06:57 -- accel/accel.sh@17 -- # local accel_module 00:08:08.218 05:06:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:08.218 05:06:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:08.218 05:06:57 -- accel/accel.sh@12 -- # build_accel_config 00:08:08.218 05:06:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:08.218 05:06:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.218 05:06:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.218 05:06:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:08.219 05:06:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:08.219 05:06:57 -- accel/accel.sh@41 -- # local IFS=, 00:08:08.219 05:06:57 -- accel/accel.sh@42 -- # jq -r . 00:08:08.219 [2024-12-08 05:06:57.728123] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:08.219 [2024-12-08 05:06:57.728380] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69161 ] 00:08:08.219 [2024-12-08 05:06:57.864107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.219 [2024-12-08 05:06:57.903650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.598 05:06:59 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:09.598 00:08:09.598 SPDK Configuration: 00:08:09.598 Core mask: 0x1 00:08:09.598 00:08:09.598 Accel Perf Configuration: 00:08:09.598 Workload Type: decompress 00:08:09.598 Transfer size: 4096 bytes 00:08:09.598 Vector count 1 00:08:09.598 Module: software 00:08:09.598 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:09.598 Queue depth: 32 00:08:09.598 Allocate depth: 32 00:08:09.598 # threads/core: 2 00:08:09.598 Run time: 1 seconds 00:08:09.598 Verify: Yes 00:08:09.598 00:08:09.598 Running for 1 seconds... 00:08:09.598 00:08:09.598 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:09.598 ------------------------------------------------------------------------------------ 00:08:09.598 0,1 35456/s 65 MiB/s 0 0 00:08:09.598 0,0 35392/s 65 MiB/s 0 0 00:08:09.598 ==================================================================================== 00:08:09.598 Total 70848/s 276 MiB/s 0 0' 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # IFS=: 00:08:09.598 05:06:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # read -r var val 00:08:09.598 05:06:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:09.598 05:06:59 -- accel/accel.sh@12 -- # build_accel_config 00:08:09.598 05:06:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:09.598 05:06:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.598 05:06:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.598 05:06:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:09.598 05:06:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:09.598 05:06:59 -- accel/accel.sh@41 -- # local IFS=, 00:08:09.598 05:06:59 -- accel/accel.sh@42 -- # jq -r . 00:08:09.598 [2024-12-08 05:06:59.068502] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:09.598 [2024-12-08 05:06:59.068588] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69175 ] 00:08:09.598 [2024-12-08 05:06:59.207142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.598 [2024-12-08 05:06:59.246594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.598 05:06:59 -- accel/accel.sh@21 -- # val= 00:08:09.598 05:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # IFS=: 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # read -r var val 00:08:09.598 05:06:59 -- accel/accel.sh@21 -- # val= 00:08:09.598 05:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # IFS=: 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # read -r var val 00:08:09.598 05:06:59 -- accel/accel.sh@21 -- # val= 00:08:09.598 05:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # IFS=: 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # read -r var val 00:08:09.598 05:06:59 -- accel/accel.sh@21 -- # val=0x1 00:08:09.598 05:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # IFS=: 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # read -r var val 00:08:09.598 05:06:59 -- accel/accel.sh@21 -- # val= 00:08:09.598 05:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # IFS=: 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # read -r var val 00:08:09.598 05:06:59 -- accel/accel.sh@21 -- # val= 00:08:09.598 05:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # IFS=: 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # read -r var val 00:08:09.598 05:06:59 -- accel/accel.sh@21 -- # val=decompress 00:08:09.598 05:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.598 05:06:59 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # IFS=: 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # read -r var val 00:08:09.598 05:06:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:09.598 05:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # IFS=: 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # read -r var val 00:08:09.598 05:06:59 -- accel/accel.sh@21 -- # val= 00:08:09.598 05:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # IFS=: 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # read -r var val 00:08:09.598 05:06:59 -- accel/accel.sh@21 -- # val=software 00:08:09.598 05:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.598 05:06:59 -- accel/accel.sh@23 -- # accel_module=software 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # IFS=: 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # read -r var val 00:08:09.598 05:06:59 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:09.598 05:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # IFS=: 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # read -r var val 00:08:09.598 05:06:59 -- accel/accel.sh@21 -- # val=32 00:08:09.598 05:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # IFS=: 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # read -r var val 00:08:09.598 05:06:59 -- 
accel/accel.sh@21 -- # val=32 00:08:09.598 05:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # IFS=: 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # read -r var val 00:08:09.598 05:06:59 -- accel/accel.sh@21 -- # val=2 00:08:09.598 05:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # IFS=: 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # read -r var val 00:08:09.598 05:06:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:09.598 05:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # IFS=: 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # read -r var val 00:08:09.598 05:06:59 -- accel/accel.sh@21 -- # val=Yes 00:08:09.598 05:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # IFS=: 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # read -r var val 00:08:09.598 05:06:59 -- accel/accel.sh@21 -- # val= 00:08:09.598 05:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # IFS=: 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # read -r var val 00:08:09.598 05:06:59 -- accel/accel.sh@21 -- # val= 00:08:09.598 05:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # IFS=: 00:08:09.598 05:06:59 -- accel/accel.sh@20 -- # read -r var val 00:08:10.979 05:07:00 -- accel/accel.sh@21 -- # val= 00:08:10.979 05:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.979 05:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:10.979 05:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:10.979 05:07:00 -- accel/accel.sh@21 -- # val= 00:08:10.979 05:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.979 05:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:10.979 05:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:10.979 05:07:00 -- accel/accel.sh@21 -- # val= 00:08:10.979 05:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.979 05:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:10.979 05:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:10.979 05:07:00 -- accel/accel.sh@21 -- # val= 00:08:10.979 05:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.979 05:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:10.979 05:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:10.979 05:07:00 -- accel/accel.sh@21 -- # val= 00:08:10.979 05:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.979 05:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:10.979 05:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:10.979 05:07:00 -- accel/accel.sh@21 -- # val= 00:08:10.979 05:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.979 05:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:10.979 05:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:10.979 05:07:00 -- accel/accel.sh@21 -- # val= 00:08:10.979 05:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.979 05:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:10.979 ************************************ 00:08:10.979 END TEST accel_decomp_mthread 00:08:10.979 ************************************ 00:08:10.979 05:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:10.979 05:07:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:10.979 05:07:00 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:10.979 05:07:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:10.979 00:08:10.979 real 0m2.699s 00:08:10.979 user 0m2.316s 00:08:10.979 sys 0m0.175s 00:08:10.979 05:07:00 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:08:10.979 05:07:00 -- common/autotest_common.sh@10 -- # set +x 00:08:10.979 05:07:00 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:10.979 05:07:00 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:10.979 05:07:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.979 05:07:00 -- common/autotest_common.sh@10 -- # set +x 00:08:10.979 ************************************ 00:08:10.979 START TEST accel_deomp_full_mthread 00:08:10.979 ************************************ 00:08:10.979 05:07:00 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:10.979 05:07:00 -- accel/accel.sh@16 -- # local accel_opc 00:08:10.979 05:07:00 -- accel/accel.sh@17 -- # local accel_module 00:08:10.979 05:07:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:10.979 05:07:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:10.979 05:07:00 -- accel/accel.sh@12 -- # build_accel_config 00:08:10.979 05:07:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:10.979 05:07:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.979 05:07:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.979 05:07:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:10.979 05:07:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:10.979 05:07:00 -- accel/accel.sh@41 -- # local IFS=, 00:08:10.979 05:07:00 -- accel/accel.sh@42 -- # jq -r . 00:08:10.979 [2024-12-08 05:07:00.478781] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:10.979 [2024-12-08 05:07:00.478873] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69209 ] 00:08:10.979 [2024-12-08 05:07:00.616152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.979 [2024-12-08 05:07:00.659920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.355 05:07:01 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:12.355 00:08:12.355 SPDK Configuration: 00:08:12.355 Core mask: 0x1 00:08:12.355 00:08:12.355 Accel Perf Configuration: 00:08:12.355 Workload Type: decompress 00:08:12.355 Transfer size: 111250 bytes 00:08:12.355 Vector count 1 00:08:12.355 Module: software 00:08:12.355 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:12.355 Queue depth: 32 00:08:12.355 Allocate depth: 32 00:08:12.355 # threads/core: 2 00:08:12.355 Run time: 1 seconds 00:08:12.355 Verify: Yes 00:08:12.355 00:08:12.355 Running for 1 seconds... 
00:08:12.355 00:08:12.355 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:12.355 ------------------------------------------------------------------------------------ 00:08:12.355 0,1 2176/s 89 MiB/s 0 0 00:08:12.355 0,0 2144/s 88 MiB/s 0 0 00:08:12.355 ==================================================================================== 00:08:12.355 Total 4320/s 458 MiB/s 0 0' 00:08:12.355 05:07:01 -- accel/accel.sh@20 -- # IFS=: 00:08:12.355 05:07:01 -- accel/accel.sh@20 -- # read -r var val 00:08:12.355 05:07:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:12.355 05:07:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:12.355 05:07:01 -- accel/accel.sh@12 -- # build_accel_config 00:08:12.355 05:07:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:12.355 05:07:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.355 05:07:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.355 05:07:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:12.355 05:07:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:12.355 05:07:01 -- accel/accel.sh@41 -- # local IFS=, 00:08:12.355 05:07:01 -- accel/accel.sh@42 -- # jq -r . 00:08:12.355 [2024-12-08 05:07:01.857611] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:12.355 [2024-12-08 05:07:01.857894] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69229 ] 00:08:12.355 [2024-12-08 05:07:01.995950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.355 [2024-12-08 05:07:02.042605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.355 05:07:02 -- accel/accel.sh@21 -- # val= 00:08:12.355 05:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # IFS=: 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # read -r var val 00:08:12.355 05:07:02 -- accel/accel.sh@21 -- # val= 00:08:12.355 05:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # IFS=: 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # read -r var val 00:08:12.355 05:07:02 -- accel/accel.sh@21 -- # val= 00:08:12.355 05:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # IFS=: 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # read -r var val 00:08:12.355 05:07:02 -- accel/accel.sh@21 -- # val=0x1 00:08:12.355 05:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # IFS=: 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # read -r var val 00:08:12.355 05:07:02 -- accel/accel.sh@21 -- # val= 00:08:12.355 05:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # IFS=: 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # read -r var val 00:08:12.355 05:07:02 -- accel/accel.sh@21 -- # val= 00:08:12.355 05:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # IFS=: 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # read -r var val 00:08:12.355 05:07:02 -- accel/accel.sh@21 -- # val=decompress 00:08:12.355 05:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.355 05:07:02 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # IFS=: 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # read -r var val 00:08:12.355 05:07:02 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:12.355 05:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # IFS=: 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # read -r var val 00:08:12.355 05:07:02 -- accel/accel.sh@21 -- # val= 00:08:12.355 05:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # IFS=: 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # read -r var val 00:08:12.355 05:07:02 -- accel/accel.sh@21 -- # val=software 00:08:12.355 05:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.355 05:07:02 -- accel/accel.sh@23 -- # accel_module=software 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # IFS=: 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # read -r var val 00:08:12.355 05:07:02 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:12.355 05:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # IFS=: 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # read -r var val 00:08:12.355 05:07:02 -- accel/accel.sh@21 -- # val=32 00:08:12.355 05:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # IFS=: 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # read -r var val 00:08:12.355 05:07:02 -- accel/accel.sh@21 -- # val=32 00:08:12.355 05:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # IFS=: 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # read -r var val 00:08:12.355 05:07:02 -- accel/accel.sh@21 -- # val=2 00:08:12.355 05:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # IFS=: 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # read -r var val 00:08:12.355 05:07:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:12.355 05:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # IFS=: 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # read -r var val 00:08:12.355 05:07:02 -- accel/accel.sh@21 -- # val=Yes 00:08:12.355 05:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.355 05:07:02 -- accel/accel.sh@20 -- # IFS=: 00:08:12.356 05:07:02 -- accel/accel.sh@20 -- # read -r var val 00:08:12.356 05:07:02 -- accel/accel.sh@21 -- # val= 00:08:12.356 05:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.356 05:07:02 -- accel/accel.sh@20 -- # IFS=: 00:08:12.356 05:07:02 -- accel/accel.sh@20 -- # read -r var val 00:08:12.356 05:07:02 -- accel/accel.sh@21 -- # val= 00:08:12.356 05:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.356 05:07:02 -- accel/accel.sh@20 -- # IFS=: 00:08:12.356 05:07:02 -- accel/accel.sh@20 -- # read -r var val 00:08:13.767 05:07:03 -- accel/accel.sh@21 -- # val= 00:08:13.767 05:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.767 05:07:03 -- accel/accel.sh@20 -- # IFS=: 00:08:13.767 05:07:03 -- accel/accel.sh@20 -- # read -r var val 00:08:13.767 05:07:03 -- accel/accel.sh@21 -- # val= 00:08:13.767 05:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.767 05:07:03 -- accel/accel.sh@20 -- # IFS=: 00:08:13.767 05:07:03 -- accel/accel.sh@20 -- # read -r var val 00:08:13.767 05:07:03 -- accel/accel.sh@21 -- # val= 00:08:13.767 05:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.767 05:07:03 -- accel/accel.sh@20 -- # IFS=: 00:08:13.767 05:07:03 -- accel/accel.sh@20 -- # 
read -r var val 00:08:13.767 05:07:03 -- accel/accel.sh@21 -- # val= 00:08:13.767 05:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.767 05:07:03 -- accel/accel.sh@20 -- # IFS=: 00:08:13.767 05:07:03 -- accel/accel.sh@20 -- # read -r var val 00:08:13.767 05:07:03 -- accel/accel.sh@21 -- # val= 00:08:13.767 05:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.767 05:07:03 -- accel/accel.sh@20 -- # IFS=: 00:08:13.767 05:07:03 -- accel/accel.sh@20 -- # read -r var val 00:08:13.767 05:07:03 -- accel/accel.sh@21 -- # val= 00:08:13.767 05:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.767 05:07:03 -- accel/accel.sh@20 -- # IFS=: 00:08:13.767 05:07:03 -- accel/accel.sh@20 -- # read -r var val 00:08:13.767 05:07:03 -- accel/accel.sh@21 -- # val= 00:08:13.767 05:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.767 05:07:03 -- accel/accel.sh@20 -- # IFS=: 00:08:13.767 05:07:03 -- accel/accel.sh@20 -- # read -r var val 00:08:13.767 05:07:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:13.767 05:07:03 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:13.767 05:07:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.767 00:08:13.767 real 0m2.773s 00:08:13.767 user 0m2.388s 00:08:13.767 sys 0m0.175s 00:08:13.767 05:07:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.767 05:07:03 -- common/autotest_common.sh@10 -- # set +x 00:08:13.767 ************************************ 00:08:13.767 END TEST accel_deomp_full_mthread 00:08:13.767 ************************************ 00:08:13.767 05:07:03 -- accel/accel.sh@116 -- # [[ n == y ]] 00:08:13.767 05:07:03 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:13.767 05:07:03 -- accel/accel.sh@129 -- # build_accel_config 00:08:13.767 05:07:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:13.767 05:07:03 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:13.767 05:07:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.767 05:07:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.767 05:07:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.767 05:07:03 -- common/autotest_common.sh@10 -- # set +x 00:08:13.767 05:07:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:13.767 05:07:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:13.767 05:07:03 -- accel/accel.sh@41 -- # local IFS=, 00:08:13.767 05:07:03 -- accel/accel.sh@42 -- # jq -r . 00:08:13.767 ************************************ 00:08:13.767 START TEST accel_dif_functional_tests 00:08:13.767 ************************************ 00:08:13.767 05:07:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:13.767 [2024-12-08 05:07:03.327145] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:13.767 [2024-12-08 05:07:03.327348] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69259 ] 00:08:13.767 [2024-12-08 05:07:03.464920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:13.767 [2024-12-08 05:07:03.512202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.767 [2024-12-08 05:07:03.512338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.767 [2024-12-08 05:07:03.512343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.027 00:08:14.027 00:08:14.027 CUnit - A unit testing framework for C - Version 2.1-3 00:08:14.027 http://cunit.sourceforge.net/ 00:08:14.027 00:08:14.027 00:08:14.027 Suite: accel_dif 00:08:14.027 Test: verify: DIF generated, GUARD check ...passed 00:08:14.027 Test: verify: DIF generated, APPTAG check ...passed 00:08:14.027 Test: verify: DIF generated, REFTAG check ...passed 00:08:14.027 Test: verify: DIF not generated, GUARD check ...[2024-12-08 05:07:03.567945] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:14.027 passed 00:08:14.027 Test: verify: DIF not generated, APPTAG check ...[2024-12-08 05:07:03.568079] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:14.027 passed 00:08:14.027 Test: verify: DIF not generated, REFTAG check ...passed 00:08:14.027 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:14.027 Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-08 05:07:03.568126] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:14.027 [2024-12-08 05:07:03.568154] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:14.027 [2024-12-08 05:07:03.568179] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:14.027 [2024-12-08 05:07:03.568352] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:14.027 [2024-12-08 05:07:03.568426] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:14.027 passed 00:08:14.027 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:14.027 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:14.027 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:14.027 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-08 05:07:03.568698] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:14.027 passed 00:08:14.027 Test: generate copy: DIF generated, GUARD check ...passed 00:08:14.027 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:14.027 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:14.027 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:14.027 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:14.027 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:14.027 Test: generate copy: iovecs-len validate ...passed 00:08:14.027 Test: generate copy: buffer alignment validate ...passed 00:08:14.027 00:08:14.027 Run Summary: Type Total Ran Passed Failed Inactive 00:08:14.027 suites 1 1 n/a 0 0 00:08:14.027 tests 20 20 20 0 0 00:08:14.027 
asserts 204 204 204 0 n/a 00:08:14.027 00:08:14.027 Elapsed time = 0.005 seconds 00:08:14.027 [2024-12-08 05:07:03.569231] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:08:14.027 00:08:14.027 real 0m0.444s 00:08:14.027 user 0m0.512s 00:08:14.027 sys 0m0.121s 00:08:14.027 05:07:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:14.027 05:07:03 -- common/autotest_common.sh@10 -- # set +x 00:08:14.027 ************************************ 00:08:14.027 END TEST accel_dif_functional_tests 00:08:14.027 ************************************ 00:08:14.027 00:08:14.027 real 0m57.759s 00:08:14.027 user 1m2.492s 00:08:14.027 sys 0m4.789s 00:08:14.027 05:07:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:14.027 05:07:03 -- common/autotest_common.sh@10 -- # set +x 00:08:14.027 ************************************ 00:08:14.027 END TEST accel 00:08:14.027 ************************************ 00:08:14.027 05:07:03 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:14.027 05:07:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:14.027 05:07:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.027 05:07:03 -- common/autotest_common.sh@10 -- # set +x 00:08:14.286 ************************************ 00:08:14.286 START TEST accel_rpc 00:08:14.286 ************************************ 00:08:14.286 05:07:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:14.286 * Looking for test storage... 00:08:14.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:14.286 05:07:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:14.286 05:07:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:14.286 05:07:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:14.286 05:07:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:14.286 05:07:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:14.286 05:07:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:14.286 05:07:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:14.286 05:07:03 -- scripts/common.sh@335 -- # IFS=.-: 00:08:14.286 05:07:03 -- scripts/common.sh@335 -- # read -ra ver1 00:08:14.286 05:07:03 -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.286 05:07:03 -- scripts/common.sh@336 -- # read -ra ver2 00:08:14.286 05:07:03 -- scripts/common.sh@337 -- # local 'op=<' 00:08:14.286 05:07:03 -- scripts/common.sh@339 -- # ver1_l=2 00:08:14.286 05:07:03 -- scripts/common.sh@340 -- # ver2_l=1 00:08:14.286 05:07:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:14.286 05:07:03 -- scripts/common.sh@343 -- # case "$op" in 00:08:14.286 05:07:03 -- scripts/common.sh@344 -- # : 1 00:08:14.286 05:07:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:14.286 05:07:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.286 05:07:03 -- scripts/common.sh@364 -- # decimal 1 00:08:14.286 05:07:03 -- scripts/common.sh@352 -- # local d=1 00:08:14.286 05:07:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.286 05:07:03 -- scripts/common.sh@354 -- # echo 1 00:08:14.286 05:07:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:14.286 05:07:04 -- scripts/common.sh@365 -- # decimal 2 00:08:14.286 05:07:04 -- scripts/common.sh@352 -- # local d=2 00:08:14.286 05:07:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.286 05:07:04 -- scripts/common.sh@354 -- # echo 2 00:08:14.286 05:07:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:14.286 05:07:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:14.286 05:07:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:14.286 05:07:04 -- scripts/common.sh@367 -- # return 0 00:08:14.286 05:07:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.286 05:07:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:14.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.286 --rc genhtml_branch_coverage=1 00:08:14.286 --rc genhtml_function_coverage=1 00:08:14.286 --rc genhtml_legend=1 00:08:14.286 --rc geninfo_all_blocks=1 00:08:14.286 --rc geninfo_unexecuted_blocks=1 00:08:14.286 00:08:14.286 ' 00:08:14.286 05:07:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:14.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.286 --rc genhtml_branch_coverage=1 00:08:14.286 --rc genhtml_function_coverage=1 00:08:14.286 --rc genhtml_legend=1 00:08:14.286 --rc geninfo_all_blocks=1 00:08:14.286 --rc geninfo_unexecuted_blocks=1 00:08:14.286 00:08:14.286 ' 00:08:14.286 05:07:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:14.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.286 --rc genhtml_branch_coverage=1 00:08:14.286 --rc genhtml_function_coverage=1 00:08:14.286 --rc genhtml_legend=1 00:08:14.286 --rc geninfo_all_blocks=1 00:08:14.286 --rc geninfo_unexecuted_blocks=1 00:08:14.286 00:08:14.286 ' 00:08:14.286 05:07:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:14.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.286 --rc genhtml_branch_coverage=1 00:08:14.286 --rc genhtml_function_coverage=1 00:08:14.286 --rc genhtml_legend=1 00:08:14.286 --rc geninfo_all_blocks=1 00:08:14.286 --rc geninfo_unexecuted_blocks=1 00:08:14.286 00:08:14.286 ' 00:08:14.286 05:07:04 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:14.286 05:07:04 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=69336 00:08:14.286 05:07:04 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:14.286 05:07:04 -- accel/accel_rpc.sh@15 -- # waitforlisten 69336 00:08:14.286 05:07:04 -- common/autotest_common.sh@829 -- # '[' -z 69336 ']' 00:08:14.286 05:07:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.286 05:07:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:14.286 05:07:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:14.286 05:07:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:14.286 05:07:04 -- common/autotest_common.sh@10 -- # set +x 00:08:14.544 [2024-12-08 05:07:04.069753] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:14.544 [2024-12-08 05:07:04.069861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69336 ] 00:08:14.544 [2024-12-08 05:07:04.213510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.544 [2024-12-08 05:07:04.256902] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:14.544 [2024-12-08 05:07:04.257321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.544 05:07:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:14.544 05:07:04 -- common/autotest_common.sh@862 -- # return 0 00:08:14.544 05:07:04 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:14.544 05:07:04 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:14.544 05:07:04 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:14.544 05:07:04 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:14.544 05:07:04 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:14.544 05:07:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:14.544 05:07:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.544 05:07:04 -- common/autotest_common.sh@10 -- # set +x 00:08:14.544 ************************************ 00:08:14.544 START TEST accel_assign_opcode 00:08:14.544 ************************************ 00:08:14.544 05:07:04 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:08:14.544 05:07:04 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:14.544 05:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.544 05:07:04 -- common/autotest_common.sh@10 -- # set +x 00:08:14.803 [2024-12-08 05:07:04.333804] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:14.803 05:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.803 05:07:04 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:14.803 05:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.803 05:07:04 -- common/autotest_common.sh@10 -- # set +x 00:08:14.803 [2024-12-08 05:07:04.341803] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:14.803 05:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.803 05:07:04 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:14.803 05:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.803 05:07:04 -- common/autotest_common.sh@10 -- # set +x 00:08:14.803 05:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.803 05:07:04 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:14.803 05:07:04 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:14.803 05:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.803 05:07:04 -- common/autotest_common.sh@10 -- # set +x 00:08:14.803 05:07:04 -- accel/accel_rpc.sh@42 -- # grep software 00:08:14.803 05:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.803 software 00:08:14.803 
************************************ 00:08:14.803 END TEST accel_assign_opcode 00:08:14.803 ************************************ 00:08:14.803 00:08:14.803 real 0m0.217s 00:08:14.803 user 0m0.059s 00:08:14.803 sys 0m0.010s 00:08:14.803 05:07:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:14.803 05:07:04 -- common/autotest_common.sh@10 -- # set +x 00:08:14.803 05:07:04 -- accel/accel_rpc.sh@55 -- # killprocess 69336 00:08:14.803 05:07:04 -- common/autotest_common.sh@936 -- # '[' -z 69336 ']' 00:08:14.803 05:07:04 -- common/autotest_common.sh@940 -- # kill -0 69336 00:08:15.063 05:07:04 -- common/autotest_common.sh@941 -- # uname 00:08:15.063 05:07:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:15.063 05:07:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69336 00:08:15.063 killing process with pid 69336 00:08:15.063 05:07:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:15.063 05:07:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:15.063 05:07:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69336' 00:08:15.063 05:07:04 -- common/autotest_common.sh@955 -- # kill 69336 00:08:15.063 05:07:04 -- common/autotest_common.sh@960 -- # wait 69336 00:08:15.322 ************************************ 00:08:15.322 END TEST accel_rpc 00:08:15.322 ************************************ 00:08:15.322 00:08:15.322 real 0m1.062s 00:08:15.322 user 0m1.071s 00:08:15.322 sys 0m0.331s 00:08:15.322 05:07:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.322 05:07:04 -- common/autotest_common.sh@10 -- # set +x 00:08:15.322 05:07:04 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:15.322 05:07:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:15.322 05:07:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.322 05:07:04 -- common/autotest_common.sh@10 -- # set +x 00:08:15.322 ************************************ 00:08:15.322 START TEST app_cmdline 00:08:15.322 ************************************ 00:08:15.322 05:07:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:15.322 * Looking for test storage... 
00:08:15.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:15.322 05:07:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:15.322 05:07:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:15.322 05:07:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:15.322 05:07:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:15.322 05:07:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:15.322 05:07:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:15.322 05:07:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:15.322 05:07:05 -- scripts/common.sh@335 -- # IFS=.-: 00:08:15.322 05:07:05 -- scripts/common.sh@335 -- # read -ra ver1 00:08:15.322 05:07:05 -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.322 05:07:05 -- scripts/common.sh@336 -- # read -ra ver2 00:08:15.322 05:07:05 -- scripts/common.sh@337 -- # local 'op=<' 00:08:15.322 05:07:05 -- scripts/common.sh@339 -- # ver1_l=2 00:08:15.322 05:07:05 -- scripts/common.sh@340 -- # ver2_l=1 00:08:15.322 05:07:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:15.322 05:07:05 -- scripts/common.sh@343 -- # case "$op" in 00:08:15.322 05:07:05 -- scripts/common.sh@344 -- # : 1 00:08:15.322 05:07:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:15.322 05:07:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:15.322 05:07:05 -- scripts/common.sh@364 -- # decimal 1 00:08:15.322 05:07:05 -- scripts/common.sh@352 -- # local d=1 00:08:15.322 05:07:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.322 05:07:05 -- scripts/common.sh@354 -- # echo 1 00:08:15.322 05:07:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:15.322 05:07:05 -- scripts/common.sh@365 -- # decimal 2 00:08:15.322 05:07:05 -- scripts/common.sh@352 -- # local d=2 00:08:15.322 05:07:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.322 05:07:05 -- scripts/common.sh@354 -- # echo 2 00:08:15.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:15.322 05:07:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:15.322 05:07:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:15.322 05:07:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:15.322 05:07:05 -- scripts/common.sh@367 -- # return 0 00:08:15.322 05:07:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.322 05:07:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:15.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.322 --rc genhtml_branch_coverage=1 00:08:15.322 --rc genhtml_function_coverage=1 00:08:15.322 --rc genhtml_legend=1 00:08:15.322 --rc geninfo_all_blocks=1 00:08:15.322 --rc geninfo_unexecuted_blocks=1 00:08:15.322 00:08:15.322 ' 00:08:15.322 05:07:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:15.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.322 --rc genhtml_branch_coverage=1 00:08:15.322 --rc genhtml_function_coverage=1 00:08:15.322 --rc genhtml_legend=1 00:08:15.322 --rc geninfo_all_blocks=1 00:08:15.322 --rc geninfo_unexecuted_blocks=1 00:08:15.322 00:08:15.322 ' 00:08:15.322 05:07:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:15.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.322 --rc genhtml_branch_coverage=1 00:08:15.322 --rc genhtml_function_coverage=1 00:08:15.322 --rc genhtml_legend=1 00:08:15.322 --rc geninfo_all_blocks=1 00:08:15.322 --rc geninfo_unexecuted_blocks=1 00:08:15.322 00:08:15.322 ' 00:08:15.322 05:07:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:15.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.322 --rc genhtml_branch_coverage=1 00:08:15.322 --rc genhtml_function_coverage=1 00:08:15.322 --rc genhtml_legend=1 00:08:15.322 --rc geninfo_all_blocks=1 00:08:15.322 --rc geninfo_unexecuted_blocks=1 00:08:15.322 00:08:15.322 ' 00:08:15.322 05:07:05 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:15.322 05:07:05 -- app/cmdline.sh@17 -- # spdk_tgt_pid=69423 00:08:15.322 05:07:05 -- app/cmdline.sh@18 -- # waitforlisten 69423 00:08:15.322 05:07:05 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:15.322 05:07:05 -- common/autotest_common.sh@829 -- # '[' -z 69423 ']' 00:08:15.322 05:07:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.322 05:07:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:15.322 05:07:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.322 05:07:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:15.322 05:07:05 -- common/autotest_common.sh@10 -- # set +x 00:08:15.580 [2024-12-08 05:07:05.161761] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:15.580 [2024-12-08 05:07:05.162100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69423 ] 00:08:15.580 [2024-12-08 05:07:05.298000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.580 [2024-12-08 05:07:05.335902] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:15.580 [2024-12-08 05:07:05.336319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.515 05:07:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:16.515 05:07:06 -- common/autotest_common.sh@862 -- # return 0 00:08:16.515 05:07:06 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:16.774 { 00:08:16.774 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:08:16.774 "fields": { 00:08:16.774 "major": 24, 00:08:16.774 "minor": 1, 00:08:16.774 "patch": 1, 00:08:16.775 "suffix": "-pre", 00:08:16.775 "commit": "c13c99a5e" 00:08:16.775 } 00:08:16.775 } 00:08:16.775 05:07:06 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:16.775 05:07:06 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:16.775 05:07:06 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:16.775 05:07:06 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:16.775 05:07:06 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:16.775 05:07:06 -- app/cmdline.sh@26 -- # sort 00:08:16.775 05:07:06 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:16.775 05:07:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.775 05:07:06 -- common/autotest_common.sh@10 -- # set +x 00:08:16.775 05:07:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.775 05:07:06 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:16.775 05:07:06 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:16.775 05:07:06 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:16.775 05:07:06 -- common/autotest_common.sh@650 -- # local es=0 00:08:16.775 05:07:06 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:16.775 05:07:06 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:16.775 05:07:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.775 05:07:06 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:16.775 05:07:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.775 05:07:06 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:16.775 05:07:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.775 05:07:06 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:16.775 05:07:06 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:16.775 05:07:06 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:17.034 request: 00:08:17.034 { 00:08:17.034 "method": "env_dpdk_get_mem_stats", 00:08:17.034 "req_id": 1 00:08:17.034 } 00:08:17.034 Got 
JSON-RPC error response 00:08:17.034 response: 00:08:17.034 { 00:08:17.034 "code": -32601, 00:08:17.034 "message": "Method not found" 00:08:17.034 } 00:08:17.034 05:07:06 -- common/autotest_common.sh@653 -- # es=1 00:08:17.034 05:07:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:17.034 05:07:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:17.034 05:07:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:17.034 05:07:06 -- app/cmdline.sh@1 -- # killprocess 69423 00:08:17.034 05:07:06 -- common/autotest_common.sh@936 -- # '[' -z 69423 ']' 00:08:17.034 05:07:06 -- common/autotest_common.sh@940 -- # kill -0 69423 00:08:17.034 05:07:06 -- common/autotest_common.sh@941 -- # uname 00:08:17.034 05:07:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:17.034 05:07:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69423 00:08:17.302 killing process with pid 69423 00:08:17.302 05:07:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:17.302 05:07:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:17.302 05:07:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69423' 00:08:17.302 05:07:06 -- common/autotest_common.sh@955 -- # kill 69423 00:08:17.302 05:07:06 -- common/autotest_common.sh@960 -- # wait 69423 00:08:17.302 ************************************ 00:08:17.302 END TEST app_cmdline 00:08:17.302 ************************************ 00:08:17.302 00:08:17.302 real 0m2.156s 00:08:17.302 user 0m2.846s 00:08:17.302 sys 0m0.382s 00:08:17.302 05:07:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.302 05:07:07 -- common/autotest_common.sh@10 -- # set +x 00:08:17.634 05:07:07 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:17.634 05:07:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:17.634 05:07:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.634 05:07:07 -- common/autotest_common.sh@10 -- # set +x 00:08:17.634 ************************************ 00:08:17.634 START TEST version 00:08:17.634 ************************************ 00:08:17.634 05:07:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:17.634 * Looking for test storage... 
00:08:17.635 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:17.635 05:07:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:17.635 05:07:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:17.635 05:07:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:17.635 05:07:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:17.635 05:07:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:17.635 05:07:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:17.635 05:07:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:17.635 05:07:07 -- scripts/common.sh@335 -- # IFS=.-: 00:08:17.635 05:07:07 -- scripts/common.sh@335 -- # read -ra ver1 00:08:17.635 05:07:07 -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.635 05:07:07 -- scripts/common.sh@336 -- # read -ra ver2 00:08:17.635 05:07:07 -- scripts/common.sh@337 -- # local 'op=<' 00:08:17.635 05:07:07 -- scripts/common.sh@339 -- # ver1_l=2 00:08:17.635 05:07:07 -- scripts/common.sh@340 -- # ver2_l=1 00:08:17.635 05:07:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:17.635 05:07:07 -- scripts/common.sh@343 -- # case "$op" in 00:08:17.635 05:07:07 -- scripts/common.sh@344 -- # : 1 00:08:17.635 05:07:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:17.635 05:07:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:17.635 05:07:07 -- scripts/common.sh@364 -- # decimal 1 00:08:17.635 05:07:07 -- scripts/common.sh@352 -- # local d=1 00:08:17.635 05:07:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.635 05:07:07 -- scripts/common.sh@354 -- # echo 1 00:08:17.635 05:07:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:17.635 05:07:07 -- scripts/common.sh@365 -- # decimal 2 00:08:17.635 05:07:07 -- scripts/common.sh@352 -- # local d=2 00:08:17.635 05:07:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.635 05:07:07 -- scripts/common.sh@354 -- # echo 2 00:08:17.635 05:07:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:17.635 05:07:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:17.635 05:07:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:17.635 05:07:07 -- scripts/common.sh@367 -- # return 0 00:08:17.635 05:07:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.635 05:07:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:17.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.635 --rc genhtml_branch_coverage=1 00:08:17.635 --rc genhtml_function_coverage=1 00:08:17.635 --rc genhtml_legend=1 00:08:17.635 --rc geninfo_all_blocks=1 00:08:17.635 --rc geninfo_unexecuted_blocks=1 00:08:17.635 00:08:17.635 ' 00:08:17.635 05:07:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:17.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.635 --rc genhtml_branch_coverage=1 00:08:17.635 --rc genhtml_function_coverage=1 00:08:17.635 --rc genhtml_legend=1 00:08:17.635 --rc geninfo_all_blocks=1 00:08:17.635 --rc geninfo_unexecuted_blocks=1 00:08:17.635 00:08:17.635 ' 00:08:17.635 05:07:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:17.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.635 --rc genhtml_branch_coverage=1 00:08:17.635 --rc genhtml_function_coverage=1 00:08:17.635 --rc genhtml_legend=1 00:08:17.635 --rc geninfo_all_blocks=1 00:08:17.635 --rc geninfo_unexecuted_blocks=1 00:08:17.635 00:08:17.635 ' 00:08:17.635 05:07:07 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:17.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.635 --rc genhtml_branch_coverage=1 00:08:17.635 --rc genhtml_function_coverage=1 00:08:17.635 --rc genhtml_legend=1 00:08:17.635 --rc geninfo_all_blocks=1 00:08:17.635 --rc geninfo_unexecuted_blocks=1 00:08:17.635 00:08:17.635 ' 00:08:17.635 05:07:07 -- app/version.sh@17 -- # get_header_version major 00:08:17.635 05:07:07 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:17.635 05:07:07 -- app/version.sh@14 -- # cut -f2 00:08:17.635 05:07:07 -- app/version.sh@14 -- # tr -d '"' 00:08:17.635 05:07:07 -- app/version.sh@17 -- # major=24 00:08:17.635 05:07:07 -- app/version.sh@18 -- # get_header_version minor 00:08:17.635 05:07:07 -- app/version.sh@14 -- # cut -f2 00:08:17.635 05:07:07 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:17.635 05:07:07 -- app/version.sh@14 -- # tr -d '"' 00:08:17.635 05:07:07 -- app/version.sh@18 -- # minor=1 00:08:17.635 05:07:07 -- app/version.sh@19 -- # get_header_version patch 00:08:17.635 05:07:07 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:17.635 05:07:07 -- app/version.sh@14 -- # cut -f2 00:08:17.635 05:07:07 -- app/version.sh@14 -- # tr -d '"' 00:08:17.635 05:07:07 -- app/version.sh@19 -- # patch=1 00:08:17.635 05:07:07 -- app/version.sh@20 -- # get_header_version suffix 00:08:17.635 05:07:07 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:17.635 05:07:07 -- app/version.sh@14 -- # cut -f2 00:08:17.635 05:07:07 -- app/version.sh@14 -- # tr -d '"' 00:08:17.635 05:07:07 -- app/version.sh@20 -- # suffix=-pre 00:08:17.635 05:07:07 -- app/version.sh@22 -- # version=24.1 00:08:17.635 05:07:07 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:17.635 05:07:07 -- app/version.sh@25 -- # version=24.1.1 00:08:17.635 05:07:07 -- app/version.sh@28 -- # version=24.1.1rc0 00:08:17.635 05:07:07 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:17.635 05:07:07 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:17.635 05:07:07 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:17.635 05:07:07 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:17.635 00:08:17.635 real 0m0.244s 00:08:17.635 user 0m0.146s 00:08:17.635 sys 0m0.129s 00:08:17.635 ************************************ 00:08:17.635 END TEST version 00:08:17.635 ************************************ 00:08:17.635 05:07:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.635 05:07:07 -- common/autotest_common.sh@10 -- # set +x 00:08:17.895 05:07:07 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:08:17.895 05:07:07 -- spdk/autotest.sh@191 -- # uname -s 00:08:17.895 05:07:07 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:08:17.895 05:07:07 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:08:17.895 05:07:07 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:08:17.895 05:07:07 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:08:17.895 05:07:07 -- spdk/autotest.sh@199 -- # run_test spdk_dd 
/home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:17.895 05:07:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:17.895 05:07:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.895 05:07:07 -- common/autotest_common.sh@10 -- # set +x 00:08:17.895 ************************************ 00:08:17.895 START TEST spdk_dd 00:08:17.895 ************************************ 00:08:17.895 05:07:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:17.895 * Looking for test storage... 00:08:17.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:17.895 05:07:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:17.895 05:07:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:17.895 05:07:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:17.895 05:07:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:17.895 05:07:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:17.895 05:07:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:17.895 05:07:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:17.895 05:07:07 -- scripts/common.sh@335 -- # IFS=.-: 00:08:17.895 05:07:07 -- scripts/common.sh@335 -- # read -ra ver1 00:08:17.895 05:07:07 -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.895 05:07:07 -- scripts/common.sh@336 -- # read -ra ver2 00:08:17.895 05:07:07 -- scripts/common.sh@337 -- # local 'op=<' 00:08:17.895 05:07:07 -- scripts/common.sh@339 -- # ver1_l=2 00:08:17.895 05:07:07 -- scripts/common.sh@340 -- # ver2_l=1 00:08:17.895 05:07:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:17.895 05:07:07 -- scripts/common.sh@343 -- # case "$op" in 00:08:17.895 05:07:07 -- scripts/common.sh@344 -- # : 1 00:08:17.895 05:07:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:17.895 05:07:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.895 05:07:07 -- scripts/common.sh@364 -- # decimal 1 00:08:17.895 05:07:07 -- scripts/common.sh@352 -- # local d=1 00:08:17.895 05:07:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.895 05:07:07 -- scripts/common.sh@354 -- # echo 1 00:08:17.895 05:07:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:17.895 05:07:07 -- scripts/common.sh@365 -- # decimal 2 00:08:17.895 05:07:07 -- scripts/common.sh@352 -- # local d=2 00:08:17.895 05:07:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.895 05:07:07 -- scripts/common.sh@354 -- # echo 2 00:08:17.895 05:07:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:17.895 05:07:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:17.895 05:07:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:17.895 05:07:07 -- scripts/common.sh@367 -- # return 0 00:08:17.895 05:07:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.895 05:07:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:17.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.895 --rc genhtml_branch_coverage=1 00:08:17.895 --rc genhtml_function_coverage=1 00:08:17.895 --rc genhtml_legend=1 00:08:17.895 --rc geninfo_all_blocks=1 00:08:17.895 --rc geninfo_unexecuted_blocks=1 00:08:17.895 00:08:17.895 ' 00:08:17.895 05:07:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:17.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.895 --rc genhtml_branch_coverage=1 00:08:17.895 --rc genhtml_function_coverage=1 00:08:17.895 --rc genhtml_legend=1 00:08:17.895 --rc geninfo_all_blocks=1 00:08:17.895 --rc geninfo_unexecuted_blocks=1 00:08:17.895 00:08:17.895 ' 00:08:17.895 05:07:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:17.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.895 --rc genhtml_branch_coverage=1 00:08:17.895 --rc genhtml_function_coverage=1 00:08:17.895 --rc genhtml_legend=1 00:08:17.895 --rc geninfo_all_blocks=1 00:08:17.895 --rc geninfo_unexecuted_blocks=1 00:08:17.895 00:08:17.895 ' 00:08:17.895 05:07:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:17.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.895 --rc genhtml_branch_coverage=1 00:08:17.895 --rc genhtml_function_coverage=1 00:08:17.895 --rc genhtml_legend=1 00:08:17.895 --rc geninfo_all_blocks=1 00:08:17.895 --rc geninfo_unexecuted_blocks=1 00:08:17.895 00:08:17.895 ' 00:08:17.895 05:07:07 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:17.895 05:07:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.895 05:07:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.895 05:07:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.895 05:07:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.895 05:07:07 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.895 05:07:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.895 05:07:07 -- paths/export.sh@5 -- # export PATH 00:08:17.895 05:07:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.895 05:07:07 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:18.155 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:18.416 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:18.416 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:18.416 05:07:07 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:08:18.416 05:07:07 -- dd/dd.sh@11 -- # nvme_in_userspace 00:08:18.416 05:07:07 -- scripts/common.sh@311 -- # local bdf bdfs 00:08:18.416 05:07:07 -- scripts/common.sh@312 -- # local nvmes 00:08:18.416 05:07:07 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:08:18.416 05:07:07 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:08:18.416 05:07:07 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:08:18.416 05:07:07 -- scripts/common.sh@297 -- # local bdf= 00:08:18.416 05:07:07 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:08:18.416 05:07:07 -- scripts/common.sh@232 -- # local class 00:08:18.416 05:07:07 -- scripts/common.sh@233 -- # local subclass 00:08:18.416 05:07:07 -- scripts/common.sh@234 -- # local progif 00:08:18.416 05:07:07 -- scripts/common.sh@235 -- # printf %02x 1 00:08:18.416 05:07:08 -- scripts/common.sh@235 -- # class=01 00:08:18.416 05:07:08 -- scripts/common.sh@236 -- # printf %02x 8 00:08:18.416 05:07:08 -- scripts/common.sh@236 -- # subclass=08 00:08:18.416 05:07:08 -- scripts/common.sh@237 -- # printf %02x 2 00:08:18.416 05:07:08 -- scripts/common.sh@237 -- # progif=02 00:08:18.416 05:07:08 -- scripts/common.sh@239 -- # hash lspci 00:08:18.416 05:07:08 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:08:18.416 05:07:08 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:08:18.416 05:07:08 -- scripts/common.sh@242 -- # grep -i -- -p02 00:08:18.416 05:07:08 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:08:18.416 05:07:08 -- scripts/common.sh@244 -- # tr -d '"' 00:08:18.416 05:07:08 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:18.416 05:07:08 -- scripts/common.sh@300 -- # 
pci_can_use 0000:00:06.0 00:08:18.416 05:07:08 -- scripts/common.sh@15 -- # local i 00:08:18.416 05:07:08 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:08:18.416 05:07:08 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:08:18.416 05:07:08 -- scripts/common.sh@24 -- # return 0 00:08:18.416 05:07:08 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:08:18.416 05:07:08 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:18.416 05:07:08 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:08:18.416 05:07:08 -- scripts/common.sh@15 -- # local i 00:08:18.416 05:07:08 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:08:18.416 05:07:08 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:08:18.416 05:07:08 -- scripts/common.sh@24 -- # return 0 00:08:18.416 05:07:08 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:08:18.416 05:07:08 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:08:18.416 05:07:08 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:08:18.416 05:07:08 -- scripts/common.sh@322 -- # uname -s 00:08:18.416 05:07:08 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:08:18.416 05:07:08 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:08:18.416 05:07:08 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:08:18.416 05:07:08 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:08:18.416 05:07:08 -- scripts/common.sh@322 -- # uname -s 00:08:18.416 05:07:08 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:08:18.416 05:07:08 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:08:18.416 05:07:08 -- scripts/common.sh@327 -- # (( 2 )) 00:08:18.416 05:07:08 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:08:18.416 05:07:08 -- dd/dd.sh@13 -- # check_liburing 00:08:18.416 05:07:08 -- dd/common.sh@139 -- # local lib so 00:08:18.416 05:07:08 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:08:18.416 05:07:08 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == 
liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:08:18.416 
05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.416 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:08:18.416 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:08:18.417 05:07:08 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* 
]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.417 05:07:08 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:08:18.417 05:07:08 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:08:18.417 * spdk_dd linked to liburing 00:08:18.417 05:07:08 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:18.417 05:07:08 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:18.417 05:07:08 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:18.417 05:07:08 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:18.417 05:07:08 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:18.417 05:07:08 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:18.417 05:07:08 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:08:18.417 05:07:08 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:18.417 05:07:08 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:18.417 05:07:08 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:18.417 05:07:08 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:18.417 05:07:08 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:18.417 05:07:08 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:18.417 05:07:08 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:18.417 05:07:08 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:18.417 05:07:08 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:18.417 05:07:08 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:18.417 05:07:08 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:18.417 05:07:08 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:18.417 05:07:08 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:18.417 05:07:08 
-- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:18.417 05:07:08 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:18.417 05:07:08 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:18.417 05:07:08 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:18.417 05:07:08 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:18.417 05:07:08 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:18.417 05:07:08 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:18.417 05:07:08 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:18.417 05:07:08 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:18.417 05:07:08 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:18.417 05:07:08 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:18.417 05:07:08 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:18.417 05:07:08 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:18.417 05:07:08 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:18.417 05:07:08 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:18.417 05:07:08 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:18.417 05:07:08 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:18.417 05:07:08 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:08:18.417 05:07:08 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:18.417 05:07:08 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:18.417 05:07:08 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:18.418 05:07:08 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:18.418 05:07:08 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:08:18.418 05:07:08 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:18.418 05:07:08 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:18.418 05:07:08 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:18.418 05:07:08 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:18.418 05:07:08 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:18.418 05:07:08 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:18.418 05:07:08 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:18.418 05:07:08 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:18.418 05:07:08 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:18.418 05:07:08 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:18.418 05:07:08 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:18.418 05:07:08 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:08:18.418 05:07:08 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:18.418 05:07:08 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:18.418 05:07:08 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:18.418 05:07:08 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:18.418 05:07:08 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:18.418 05:07:08 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:18.418 05:07:08 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:18.418 05:07:08 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:18.418 05:07:08 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:18.418 05:07:08 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:18.418 05:07:08 -- common/build_config.sh@64 -- # 
CONFIG_SHARED=y 00:08:18.418 05:07:08 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:18.418 05:07:08 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:18.418 05:07:08 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:18.418 05:07:08 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:18.418 05:07:08 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:18.418 05:07:08 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:18.418 05:07:08 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:18.418 05:07:08 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:18.418 05:07:08 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:18.418 05:07:08 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:18.418 05:07:08 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:18.418 05:07:08 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:18.418 05:07:08 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:18.418 05:07:08 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:18.418 05:07:08 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:08:18.418 05:07:08 -- dd/common.sh@149 -- # [[ y != y ]] 00:08:18.418 05:07:08 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:08:18.418 05:07:08 -- dd/common.sh@156 -- # export liburing_in_use=1 00:08:18.418 05:07:08 -- dd/common.sh@156 -- # liburing_in_use=1 00:08:18.418 05:07:08 -- dd/common.sh@157 -- # return 0 00:08:18.418 05:07:08 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:08:18.418 05:07:08 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:08:18.418 05:07:08 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:18.418 05:07:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.418 05:07:08 -- common/autotest_common.sh@10 -- # set +x 00:08:18.418 ************************************ 00:08:18.418 START TEST spdk_dd_basic_rw 00:08:18.418 ************************************ 00:08:18.418 05:07:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:08:18.418 * Looking for test storage... 
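The long run of [[ ... == liburing.so.* ]] checks earlier in this log is the harness walking every shared object that spdk_dd links against until it reaches liburing.so.2, printing "spdk_dd linked to liburing", confirming /usr/lib64/liburing.so.2 exists, and exporting liburing_in_use=1. The body of dd/common.sh is not reproduced in the log; a minimal standalone sketch of the same decision, assuming the name/path tuples come from ldd(1) output, could look like this:

# Sketch only: decide whether an SPDK binary is linked against liburing.
# Assumes ldd output lines of the form "liburing.so.2 => /usr/lib64/liburing.so.2 (0x...)".
spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
liburing_in_use=0
while read -r lib _ so _; do
    if [[ $lib == liburing.so.* ]]; then
        printf '* spdk_dd linked to liburing\n'
        liburing_in_use=1
        break
    fi
done < <(ldd "$spdk_dd")
export liburing_in_use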
00:08:18.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:18.418 05:07:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:18.418 05:07:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:18.418 05:07:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:18.676 05:07:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:18.676 05:07:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:18.676 05:07:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:18.676 05:07:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:18.676 05:07:08 -- scripts/common.sh@335 -- # IFS=.-: 00:08:18.676 05:07:08 -- scripts/common.sh@335 -- # read -ra ver1 00:08:18.676 05:07:08 -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.676 05:07:08 -- scripts/common.sh@336 -- # read -ra ver2 00:08:18.676 05:07:08 -- scripts/common.sh@337 -- # local 'op=<' 00:08:18.676 05:07:08 -- scripts/common.sh@339 -- # ver1_l=2 00:08:18.676 05:07:08 -- scripts/common.sh@340 -- # ver2_l=1 00:08:18.676 05:07:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:18.676 05:07:08 -- scripts/common.sh@343 -- # case "$op" in 00:08:18.676 05:07:08 -- scripts/common.sh@344 -- # : 1 00:08:18.676 05:07:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:18.676 05:07:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:18.676 05:07:08 -- scripts/common.sh@364 -- # decimal 1 00:08:18.676 05:07:08 -- scripts/common.sh@352 -- # local d=1 00:08:18.676 05:07:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.676 05:07:08 -- scripts/common.sh@354 -- # echo 1 00:08:18.676 05:07:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:18.676 05:07:08 -- scripts/common.sh@365 -- # decimal 2 00:08:18.676 05:07:08 -- scripts/common.sh@352 -- # local d=2 00:08:18.676 05:07:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.676 05:07:08 -- scripts/common.sh@354 -- # echo 2 00:08:18.676 05:07:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:18.676 05:07:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:18.676 05:07:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:18.676 05:07:08 -- scripts/common.sh@367 -- # return 0 00:08:18.676 05:07:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.676 05:07:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:18.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.676 --rc genhtml_branch_coverage=1 00:08:18.676 --rc genhtml_function_coverage=1 00:08:18.676 --rc genhtml_legend=1 00:08:18.676 --rc geninfo_all_blocks=1 00:08:18.676 --rc geninfo_unexecuted_blocks=1 00:08:18.676 00:08:18.676 ' 00:08:18.676 05:07:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:18.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.676 --rc genhtml_branch_coverage=1 00:08:18.676 --rc genhtml_function_coverage=1 00:08:18.676 --rc genhtml_legend=1 00:08:18.676 --rc geninfo_all_blocks=1 00:08:18.676 --rc geninfo_unexecuted_blocks=1 00:08:18.676 00:08:18.676 ' 00:08:18.676 05:07:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:18.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.676 --rc genhtml_branch_coverage=1 00:08:18.676 --rc genhtml_function_coverage=1 00:08:18.676 --rc genhtml_legend=1 00:08:18.676 --rc geninfo_all_blocks=1 00:08:18.676 --rc geninfo_unexecuted_blocks=1 00:08:18.676 00:08:18.676 ' 00:08:18.676 05:07:08 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:18.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.676 --rc genhtml_branch_coverage=1 00:08:18.676 --rc genhtml_function_coverage=1 00:08:18.676 --rc genhtml_legend=1 00:08:18.676 --rc geninfo_all_blocks=1 00:08:18.676 --rc geninfo_unexecuted_blocks=1 00:08:18.676 00:08:18.676 ' 00:08:18.676 05:07:08 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:18.676 05:07:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.676 05:07:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.676 05:07:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.676 05:07:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.677 05:07:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.677 05:07:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.677 05:07:08 -- paths/export.sh@5 -- # export PATH 00:08:18.677 05:07:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.677 05:07:08 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:08:18.677 05:07:08 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:08:18.677 05:07:08 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:08:18.677 05:07:08 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:08:18.677 05:07:08 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:08:18.677 05:07:08 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' 
['trtype']='pcie') 00:08:18.677 05:07:08 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:18.677 05:07:08 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:18.677 05:07:08 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:18.677 05:07:08 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:08:18.677 05:07:08 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:08:18.677 05:07:08 -- dd/common.sh@126 -- # mapfile -t id 00:08:18.677 05:07:08 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:08:18.936 05:07:08 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command 
Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 
Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 Data Units Written: 9 Host Read Commands: 2188 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:08:18.936 05:07:08 -- dd/common.sh@130 -- # lbaf=04 00:08:18.937 05:07:08 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported 
Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive 
Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 Data Units Written: 9 Host Read Commands: 2188 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:08:18.937 05:07:08 -- dd/common.sh@132 -- # lbaf=4096 00:08:18.937 05:07:08 -- dd/common.sh@134 -- # echo 4096 00:08:18.937 05:07:08 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:08:18.937 05:07:08 -- dd/basic_rw.sh@96 -- # : 00:08:18.937 05:07:08 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:18.937 05:07:08 -- dd/basic_rw.sh@96 -- # gen_conf 00:08:18.937 05:07:08 -- common/autotest_common.sh@1087 -- # '[' 8 
-le 1 ']' 00:08:18.937 05:07:08 -- dd/common.sh@31 -- # xtrace_disable 00:08:18.937 05:07:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.937 05:07:08 -- common/autotest_common.sh@10 -- # set +x 00:08:18.937 05:07:08 -- common/autotest_common.sh@10 -- # set +x 00:08:18.937 ************************************ 00:08:18.937 START TEST dd_bs_lt_native_bs 00:08:18.937 ************************************ 00:08:18.937 05:07:08 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:18.937 05:07:08 -- common/autotest_common.sh@650 -- # local es=0 00:08:18.937 05:07:08 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:18.937 05:07:08 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.937 05:07:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.937 05:07:08 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.937 05:07:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.937 05:07:08 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.937 05:07:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.937 05:07:08 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.937 05:07:08 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:18.937 05:07:08 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:18.937 { 00:08:18.937 "subsystems": [ 00:08:18.937 { 00:08:18.937 "subsystem": "bdev", 00:08:18.937 "config": [ 00:08:18.937 { 00:08:18.937 "params": { 00:08:18.937 "trtype": "pcie", 00:08:18.937 "traddr": "0000:00:06.0", 00:08:18.937 "name": "Nvme0" 00:08:18.937 }, 00:08:18.937 "method": "bdev_nvme_attach_controller" 00:08:18.937 }, 00:08:18.937 { 00:08:18.937 "method": "bdev_wait_for_examine" 00:08:18.937 } 00:08:18.937 ] 00:08:18.937 } 00:08:18.937 ] 00:08:18.937 } 00:08:18.937 [2024-12-08 05:07:08.534519] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
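The two very long [[ ... =~ ... ]] expressions above are dd/common.sh matching the spdk_nvme_identify dump twice: first to find which LBA format is current (#04), then to pull that format's data size (4096), which becomes the native block size for the rest of basic_rw.sh. A compact reconstruction of that helper follows; the function name, identify invocation, and regular expressions are taken from the log, while the use of a plain command substitution (instead of the original mapfile array) is a simplification:

# Sketch: derive the native block size of an NVMe namespace from spdk_nvme_identify output.
get_native_nvme_bs() {
    local pci=$1 id lbaf re
    # Full controller/namespace identify dump, as shown in the log above.
    id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re ]] || return 1                    # which format is in use (here: 04)
    lbaf=${BASH_REMATCH[1]}
    re="LBA Format #$lbaf: Data Size: *([0-9]+)"
    [[ $id =~ $re ]] || return 1                    # that format's data size (here: 4096)
    echo "${BASH_REMATCH[1]}"
}

native_bs=$(get_native_nvme_bs 0000:00:06.0)        # 4096 for the QEMU controller identified above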
00:08:18.937 [2024-12-08 05:07:08.534598] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69767 ] 00:08:18.938 [2024-12-08 05:07:08.676195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.938 [2024-12-08 05:07:08.716474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.196 [2024-12-08 05:07:08.833608] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:08:19.196 [2024-12-08 05:07:08.833716] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:19.196 [2024-12-08 05:07:08.905286] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:19.196 05:07:08 -- common/autotest_common.sh@653 -- # es=234 00:08:19.196 05:07:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.196 05:07:08 -- common/autotest_common.sh@662 -- # es=106 00:08:19.196 05:07:08 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:19.196 05:07:08 -- common/autotest_common.sh@670 -- # es=1 00:08:19.196 05:07:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.196 00:08:19.196 real 0m0.488s 00:08:19.196 user 0m0.325s 00:08:19.196 sys 0m0.123s 00:08:19.196 05:07:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:19.196 05:07:08 -- common/autotest_common.sh@10 -- # set +x 00:08:19.196 ************************************ 00:08:19.196 END TEST dd_bs_lt_native_bs 00:08:19.196 ************************************ 00:08:19.454 05:07:09 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:08:19.455 05:07:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:19.455 05:07:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:19.455 05:07:09 -- common/autotest_common.sh@10 -- # set +x 00:08:19.455 ************************************ 00:08:19.455 START TEST dd_rw 00:08:19.455 ************************************ 00:08:19.455 05:07:09 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:08:19.455 05:07:09 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:08:19.455 05:07:09 -- dd/basic_rw.sh@12 -- # local count size 00:08:19.455 05:07:09 -- dd/basic_rw.sh@13 -- # local qds bss 00:08:19.455 05:07:09 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:08:19.455 05:07:09 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:19.455 05:07:09 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:19.455 05:07:09 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:19.455 05:07:09 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:19.455 05:07:09 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:19.455 05:07:09 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:19.455 05:07:09 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:19.455 05:07:09 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:19.455 05:07:09 -- dd/basic_rw.sh@23 -- # count=15 00:08:19.455 05:07:09 -- dd/basic_rw.sh@24 -- # count=15 00:08:19.455 05:07:09 -- dd/basic_rw.sh@25 -- # size=61440 00:08:19.455 05:07:09 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:19.455 05:07:09 -- dd/common.sh@98 -- # xtrace_disable 00:08:19.455 05:07:09 -- common/autotest_common.sh@10 -- # set +x 00:08:20.019 05:07:09 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
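dd_bs_lt_native_bs, which just finished above, is a negative test: with the namespace's native block size known to be 4096, spdk_dd is run with --bs=2048 and the case only passes if the command fails with the "--bs value cannot be less than ... native block size" error (the NOT/es machinery in autotest_common.sh inverts and normalizes the exit status, 234 ending up as es=1). Stripped of that machinery, the check amounts to the following sketch; nvme0.json is an assumed file holding the bdev configuration printed earlier, and the 2048-byte input fed via process substitution stands in for the harness's generated payload:

# Sketch: spdk_dd must refuse a block size smaller than the namespace's native block size.
spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
conf=nvme0.json      # bdev_nvme_attach_controller config from the log, saved to a file (assumption)
native_bs=4096       # from get_native_nvme_bs 0000:00:06.0

if "$spdk_dd" --if=<(head -c 2048 /dev/urandom) --ob=Nvme0n1 --bs=2048 --json "$conf"; then
    echo "FAIL: spdk_dd accepted --bs=2048 although native block size is $native_bs" >&2
    exit 1
fi
echo "OK: spdk_dd rejected --bs=2048 as expected"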
00:08:20.019 05:07:09 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:20.019 05:07:09 -- dd/common.sh@31 -- # xtrace_disable 00:08:20.019 05:07:09 -- common/autotest_common.sh@10 -- # set +x 00:08:20.277 [2024-12-08 05:07:09.843423] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:20.277 [2024-12-08 05:07:09.843516] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69803 ] 00:08:20.277 { 00:08:20.277 "subsystems": [ 00:08:20.277 { 00:08:20.277 "subsystem": "bdev", 00:08:20.277 "config": [ 00:08:20.277 { 00:08:20.277 "params": { 00:08:20.277 "trtype": "pcie", 00:08:20.277 "traddr": "0000:00:06.0", 00:08:20.277 "name": "Nvme0" 00:08:20.278 }, 00:08:20.278 "method": "bdev_nvme_attach_controller" 00:08:20.278 }, 00:08:20.278 { 00:08:20.278 "method": "bdev_wait_for_examine" 00:08:20.278 } 00:08:20.278 ] 00:08:20.278 } 00:08:20.278 ] 00:08:20.278 } 00:08:20.278 [2024-12-08 05:07:09.987925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.278 [2024-12-08 05:07:10.027417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.535  [2024-12-08T05:07:10.321Z] Copying: 60/60 [kB] (average 19 MBps) 00:08:20.535 00:08:20.535 05:07:10 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:08:20.535 05:07:10 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:20.535 05:07:10 -- dd/common.sh@31 -- # xtrace_disable 00:08:20.535 05:07:10 -- common/autotest_common.sh@10 -- # set +x 00:08:20.841 [2024-12-08 05:07:10.361910] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
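Every spdk_dd invocation in this suite receives the same generated configuration on a file descriptor: gen_conf emits the JSON shown above (attach Nvme0 over PCIe at 0000:00:06.0, then bdev_wait_for_examine) and the test passes it as --json /dev/fd/6x. As an illustration of the same pattern with the helper inlined as a heredoc, gen_nvme0_conf being an assumed stand-in for the harness's gen_conf:

# Illustration: feed spdk_dd the bdev config from the log via process substitution.
gen_nvme0_conf() {
    cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 \
    --bs=4096 --qd=1 --json <(gen_nvme0_conf)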
00:08:20.841 [2024-12-08 05:07:10.362012] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69816 ] 00:08:20.841 { 00:08:20.841 "subsystems": [ 00:08:20.841 { 00:08:20.841 "subsystem": "bdev", 00:08:20.841 "config": [ 00:08:20.841 { 00:08:20.841 "params": { 00:08:20.841 "trtype": "pcie", 00:08:20.841 "traddr": "0000:00:06.0", 00:08:20.841 "name": "Nvme0" 00:08:20.841 }, 00:08:20.841 "method": "bdev_nvme_attach_controller" 00:08:20.841 }, 00:08:20.841 { 00:08:20.841 "method": "bdev_wait_for_examine" 00:08:20.841 } 00:08:20.841 ] 00:08:20.841 } 00:08:20.841 ] 00:08:20.841 } 00:08:20.841 [2024-12-08 05:07:10.503172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.841 [2024-12-08 05:07:10.542956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.114  [2024-12-08T05:07:10.900Z] Copying: 60/60 [kB] (average 29 MBps) 00:08:21.114 00:08:21.114 05:07:10 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:21.114 05:07:10 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:21.114 05:07:10 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:21.114 05:07:10 -- dd/common.sh@11 -- # local nvme_ref= 00:08:21.114 05:07:10 -- dd/common.sh@12 -- # local size=61440 00:08:21.114 05:07:10 -- dd/common.sh@14 -- # local bs=1048576 00:08:21.114 05:07:10 -- dd/common.sh@15 -- # local count=1 00:08:21.114 05:07:10 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:21.114 05:07:10 -- dd/common.sh@18 -- # gen_conf 00:08:21.114 05:07:10 -- dd/common.sh@31 -- # xtrace_disable 00:08:21.114 05:07:10 -- common/autotest_common.sh@10 -- # set +x 00:08:21.114 [2024-12-08 05:07:10.866727] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:21.114 [2024-12-08 05:07:10.866887] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69829 ] 00:08:21.114 { 00:08:21.114 "subsystems": [ 00:08:21.114 { 00:08:21.114 "subsystem": "bdev", 00:08:21.114 "config": [ 00:08:21.114 { 00:08:21.114 "params": { 00:08:21.114 "trtype": "pcie", 00:08:21.114 "traddr": "0000:00:06.0", 00:08:21.114 "name": "Nvme0" 00:08:21.114 }, 00:08:21.114 "method": "bdev_nvme_attach_controller" 00:08:21.114 }, 00:08:21.114 { 00:08:21.114 "method": "bdev_wait_for_examine" 00:08:21.114 } 00:08:21.114 ] 00:08:21.114 } 00:08:21.114 ] 00:08:21.114 } 00:08:21.372 [2024-12-08 05:07:11.000347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.372 [2024-12-08 05:07:11.040304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.372  [2024-12-08T05:07:11.416Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:21.630 00:08:21.630 05:07:11 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:21.630 05:07:11 -- dd/basic_rw.sh@23 -- # count=15 00:08:21.630 05:07:11 -- dd/basic_rw.sh@24 -- # count=15 00:08:21.630 05:07:11 -- dd/basic_rw.sh@25 -- # size=61440 00:08:21.630 05:07:11 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:21.630 05:07:11 -- dd/common.sh@98 -- # xtrace_disable 00:08:21.630 05:07:11 -- common/autotest_common.sh@10 -- # set +x 00:08:22.196 05:07:11 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:08:22.196 05:07:11 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:22.196 05:07:11 -- dd/common.sh@31 -- # xtrace_disable 00:08:22.196 05:07:11 -- common/autotest_common.sh@10 -- # set +x 00:08:22.196 [2024-12-08 05:07:11.965708] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:22.196 [2024-12-08 05:07:11.965814] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69847 ] 00:08:22.196 { 00:08:22.196 "subsystems": [ 00:08:22.196 { 00:08:22.196 "subsystem": "bdev", 00:08:22.196 "config": [ 00:08:22.196 { 00:08:22.196 "params": { 00:08:22.196 "trtype": "pcie", 00:08:22.196 "traddr": "0000:00:06.0", 00:08:22.196 "name": "Nvme0" 00:08:22.196 }, 00:08:22.196 "method": "bdev_nvme_attach_controller" 00:08:22.196 }, 00:08:22.196 { 00:08:22.196 "method": "bdev_wait_for_examine" 00:08:22.196 } 00:08:22.196 ] 00:08:22.196 } 00:08:22.196 ] 00:08:22.196 } 00:08:22.454 [2024-12-08 05:07:12.109784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.454 [2024-12-08 05:07:12.150452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.712  [2024-12-08T05:07:12.498Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:22.712 00:08:22.712 05:07:12 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:08:22.712 05:07:12 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:22.712 05:07:12 -- dd/common.sh@31 -- # xtrace_disable 00:08:22.712 05:07:12 -- common/autotest_common.sh@10 -- # set +x 00:08:22.712 [2024-12-08 05:07:12.464239] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:22.712 [2024-12-08 05:07:12.464339] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69860 ] 00:08:22.712 { 00:08:22.712 "subsystems": [ 00:08:22.712 { 00:08:22.712 "subsystem": "bdev", 00:08:22.712 "config": [ 00:08:22.712 { 00:08:22.712 "params": { 00:08:22.712 "trtype": "pcie", 00:08:22.712 "traddr": "0000:00:06.0", 00:08:22.712 "name": "Nvme0" 00:08:22.712 }, 00:08:22.712 "method": "bdev_nvme_attach_controller" 00:08:22.712 }, 00:08:22.712 { 00:08:22.712 "method": "bdev_wait_for_examine" 00:08:22.712 } 00:08:22.712 ] 00:08:22.712 } 00:08:22.712 ] 00:08:22.712 } 00:08:22.971 [2024-12-08 05:07:12.596910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.972 [2024-12-08 05:07:12.635940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.972  [2024-12-08T05:07:13.016Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:23.230 00:08:23.230 05:07:12 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:23.230 05:07:12 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:23.230 05:07:12 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:23.230 05:07:12 -- dd/common.sh@11 -- # local nvme_ref= 00:08:23.230 05:07:12 -- dd/common.sh@12 -- # local size=61440 00:08:23.230 05:07:12 -- dd/common.sh@14 -- # local bs=1048576 00:08:23.230 05:07:12 -- dd/common.sh@15 -- # local count=1 00:08:23.230 05:07:12 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:23.230 05:07:12 -- dd/common.sh@18 -- # gen_conf 00:08:23.230 05:07:12 -- dd/common.sh@31 -- # xtrace_disable 00:08:23.230 05:07:12 -- common/autotest_common.sh@10 -- # set +x 00:08:23.230 [2024-12-08 
05:07:12.954479] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:23.230 [2024-12-08 05:07:12.954591] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69873 ] 00:08:23.230 { 00:08:23.230 "subsystems": [ 00:08:23.230 { 00:08:23.230 "subsystem": "bdev", 00:08:23.230 "config": [ 00:08:23.230 { 00:08:23.230 "params": { 00:08:23.230 "trtype": "pcie", 00:08:23.230 "traddr": "0000:00:06.0", 00:08:23.230 "name": "Nvme0" 00:08:23.230 }, 00:08:23.230 "method": "bdev_nvme_attach_controller" 00:08:23.230 }, 00:08:23.230 { 00:08:23.230 "method": "bdev_wait_for_examine" 00:08:23.230 } 00:08:23.230 ] 00:08:23.230 } 00:08:23.230 ] 00:08:23.230 } 00:08:23.489 [2024-12-08 05:07:13.095689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.490 [2024-12-08 05:07:13.140020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.490  [2024-12-08T05:07:13.534Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:23.748 00:08:23.748 05:07:13 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:23.748 05:07:13 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:23.748 05:07:13 -- dd/basic_rw.sh@23 -- # count=7 00:08:23.748 05:07:13 -- dd/basic_rw.sh@24 -- # count=7 00:08:23.748 05:07:13 -- dd/basic_rw.sh@25 -- # size=57344 00:08:23.748 05:07:13 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:23.749 05:07:13 -- dd/common.sh@98 -- # xtrace_disable 00:08:23.749 05:07:13 -- common/autotest_common.sh@10 -- # set +x 00:08:24.316 05:07:13 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:24.316 05:07:13 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:24.316 05:07:13 -- dd/common.sh@31 -- # xtrace_disable 00:08:24.316 05:07:13 -- common/autotest_common.sh@10 -- # set +x 00:08:24.316 [2024-12-08 05:07:14.036486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:24.316 [2024-12-08 05:07:14.036580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69896 ] 00:08:24.316 { 00:08:24.316 "subsystems": [ 00:08:24.316 { 00:08:24.316 "subsystem": "bdev", 00:08:24.316 "config": [ 00:08:24.316 { 00:08:24.316 "params": { 00:08:24.316 "trtype": "pcie", 00:08:24.316 "traddr": "0000:00:06.0", 00:08:24.316 "name": "Nvme0" 00:08:24.316 }, 00:08:24.316 "method": "bdev_nvme_attach_controller" 00:08:24.316 }, 00:08:24.316 { 00:08:24.316 "method": "bdev_wait_for_examine" 00:08:24.316 } 00:08:24.316 ] 00:08:24.316 } 00:08:24.316 ] 00:08:24.316 } 00:08:24.575 [2024-12-08 05:07:14.178077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.575 [2024-12-08 05:07:14.226125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.575  [2024-12-08T05:07:14.620Z] Copying: 56/56 [kB] (average 27 MBps) 00:08:24.834 00:08:24.834 05:07:14 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:24.834 05:07:14 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:24.834 05:07:14 -- dd/common.sh@31 -- # xtrace_disable 00:08:24.834 05:07:14 -- common/autotest_common.sh@10 -- # set +x 00:08:24.834 [2024-12-08 05:07:14.566668] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:24.834 [2024-12-08 05:07:14.566818] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69904 ] 00:08:24.834 { 00:08:24.834 "subsystems": [ 00:08:24.834 { 00:08:24.834 "subsystem": "bdev", 00:08:24.834 "config": [ 00:08:24.834 { 00:08:24.834 "params": { 00:08:24.834 "trtype": "pcie", 00:08:24.834 "traddr": "0000:00:06.0", 00:08:24.834 "name": "Nvme0" 00:08:24.834 }, 00:08:24.834 "method": "bdev_nvme_attach_controller" 00:08:24.834 }, 00:08:24.834 { 00:08:24.834 "method": "bdev_wait_for_examine" 00:08:24.834 } 00:08:24.834 ] 00:08:24.834 } 00:08:24.834 ] 00:08:24.834 } 00:08:25.093 [2024-12-08 05:07:14.699002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.093 [2024-12-08 05:07:14.738328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.093  [2024-12-08T05:07:15.138Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:25.352 00:08:25.352 05:07:15 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:25.352 05:07:15 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:25.352 05:07:15 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:25.352 05:07:15 -- dd/common.sh@11 -- # local nvme_ref= 00:08:25.352 05:07:15 -- dd/common.sh@12 -- # local size=57344 00:08:25.352 05:07:15 -- dd/common.sh@14 -- # local bs=1048576 00:08:25.352 05:07:15 -- dd/common.sh@15 -- # local count=1 00:08:25.352 05:07:15 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:25.352 05:07:15 -- dd/common.sh@18 -- # gen_conf 00:08:25.352 05:07:15 -- dd/common.sh@31 -- # xtrace_disable 00:08:25.352 05:07:15 -- common/autotest_common.sh@10 -- # set +x 00:08:25.352 [2024-12-08 
05:07:15.088367] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:25.352 [2024-12-08 05:07:15.088462] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69923 ] 00:08:25.352 { 00:08:25.352 "subsystems": [ 00:08:25.352 { 00:08:25.352 "subsystem": "bdev", 00:08:25.352 "config": [ 00:08:25.352 { 00:08:25.352 "params": { 00:08:25.352 "trtype": "pcie", 00:08:25.352 "traddr": "0000:00:06.0", 00:08:25.352 "name": "Nvme0" 00:08:25.352 }, 00:08:25.352 "method": "bdev_nvme_attach_controller" 00:08:25.352 }, 00:08:25.352 { 00:08:25.352 "method": "bdev_wait_for_examine" 00:08:25.352 } 00:08:25.352 ] 00:08:25.352 } 00:08:25.352 ] 00:08:25.352 } 00:08:25.610 [2024-12-08 05:07:15.227364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.610 [2024-12-08 05:07:15.269335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.610  [2024-12-08T05:07:15.655Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:25.869 00:08:25.869 05:07:15 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:25.869 05:07:15 -- dd/basic_rw.sh@23 -- # count=7 00:08:25.869 05:07:15 -- dd/basic_rw.sh@24 -- # count=7 00:08:25.869 05:07:15 -- dd/basic_rw.sh@25 -- # size=57344 00:08:25.869 05:07:15 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:25.869 05:07:15 -- dd/common.sh@98 -- # xtrace_disable 00:08:25.869 05:07:15 -- common/autotest_common.sh@10 -- # set +x 00:08:26.437 05:07:16 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:26.437 05:07:16 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:26.437 05:07:16 -- dd/common.sh@31 -- # xtrace_disable 00:08:26.437 05:07:16 -- common/autotest_common.sh@10 -- # set +x 00:08:26.437 [2024-12-08 05:07:16.210439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:26.437 [2024-12-08 05:07:16.210530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69941 ] 00:08:26.437 { 00:08:26.437 "subsystems": [ 00:08:26.437 { 00:08:26.437 "subsystem": "bdev", 00:08:26.437 "config": [ 00:08:26.437 { 00:08:26.437 "params": { 00:08:26.437 "trtype": "pcie", 00:08:26.437 "traddr": "0000:00:06.0", 00:08:26.437 "name": "Nvme0" 00:08:26.437 }, 00:08:26.437 "method": "bdev_nvme_attach_controller" 00:08:26.437 }, 00:08:26.437 { 00:08:26.437 "method": "bdev_wait_for_examine" 00:08:26.437 } 00:08:26.437 ] 00:08:26.437 } 00:08:26.437 ] 00:08:26.437 } 00:08:26.695 [2024-12-08 05:07:16.348237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.695 [2024-12-08 05:07:16.393962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.954  [2024-12-08T05:07:16.740Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:26.954 00:08:26.954 05:07:16 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:26.954 05:07:16 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:26.954 05:07:16 -- dd/common.sh@31 -- # xtrace_disable 00:08:26.954 05:07:16 -- common/autotest_common.sh@10 -- # set +x 00:08:26.954 [2024-12-08 05:07:16.715331] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:26.954 [2024-12-08 05:07:16.715551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69948 ] 00:08:26.954 { 00:08:26.954 "subsystems": [ 00:08:26.954 { 00:08:26.954 "subsystem": "bdev", 00:08:26.954 "config": [ 00:08:26.954 { 00:08:26.954 "params": { 00:08:26.954 "trtype": "pcie", 00:08:26.954 "traddr": "0000:00:06.0", 00:08:26.954 "name": "Nvme0" 00:08:26.954 }, 00:08:26.954 "method": "bdev_nvme_attach_controller" 00:08:26.954 }, 00:08:26.954 { 00:08:26.954 "method": "bdev_wait_for_examine" 00:08:26.954 } 00:08:26.954 ] 00:08:26.954 } 00:08:26.954 ] 00:08:26.954 } 00:08:27.212 [2024-12-08 05:07:16.848144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.212 [2024-12-08 05:07:16.886813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.470  [2024-12-08T05:07:17.256Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:27.470 00:08:27.470 05:07:17 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:27.470 05:07:17 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:27.470 05:07:17 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:27.470 05:07:17 -- dd/common.sh@11 -- # local nvme_ref= 00:08:27.470 05:07:17 -- dd/common.sh@12 -- # local size=57344 00:08:27.470 05:07:17 -- dd/common.sh@14 -- # local bs=1048576 00:08:27.470 05:07:17 -- dd/common.sh@15 -- # local count=1 00:08:27.470 05:07:17 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:27.470 05:07:17 -- dd/common.sh@18 -- # gen_conf 00:08:27.470 05:07:17 -- dd/common.sh@31 -- # xtrace_disable 00:08:27.470 05:07:17 -- common/autotest_common.sh@10 -- # set +x 00:08:27.470 [2024-12-08 
05:07:17.240552] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:27.470 [2024-12-08 05:07:17.240820] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69967 ] 00:08:27.470 { 00:08:27.470 "subsystems": [ 00:08:27.471 { 00:08:27.471 "subsystem": "bdev", 00:08:27.471 "config": [ 00:08:27.471 { 00:08:27.471 "params": { 00:08:27.471 "trtype": "pcie", 00:08:27.471 "traddr": "0000:00:06.0", 00:08:27.471 "name": "Nvme0" 00:08:27.471 }, 00:08:27.471 "method": "bdev_nvme_attach_controller" 00:08:27.471 }, 00:08:27.471 { 00:08:27.471 "method": "bdev_wait_for_examine" 00:08:27.471 } 00:08:27.471 ] 00:08:27.471 } 00:08:27.471 ] 00:08:27.471 } 00:08:27.742 [2024-12-08 05:07:17.378035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.742 [2024-12-08 05:07:17.420571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.000  [2024-12-08T05:07:17.786Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:28.000 00:08:28.000 05:07:17 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:28.000 05:07:17 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:28.000 05:07:17 -- dd/basic_rw.sh@23 -- # count=3 00:08:28.000 05:07:17 -- dd/basic_rw.sh@24 -- # count=3 00:08:28.000 05:07:17 -- dd/basic_rw.sh@25 -- # size=49152 00:08:28.000 05:07:17 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:28.000 05:07:17 -- dd/common.sh@98 -- # xtrace_disable 00:08:28.000 05:07:17 -- common/autotest_common.sh@10 -- # set +x 00:08:28.566 05:07:18 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:28.566 05:07:18 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:28.566 05:07:18 -- dd/common.sh@31 -- # xtrace_disable 00:08:28.566 05:07:18 -- common/autotest_common.sh@10 -- # set +x 00:08:28.566 [2024-12-08 05:07:18.250199] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:28.566 [2024-12-08 05:07:18.250439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69985 ] 00:08:28.566 { 00:08:28.566 "subsystems": [ 00:08:28.566 { 00:08:28.566 "subsystem": "bdev", 00:08:28.566 "config": [ 00:08:28.566 { 00:08:28.566 "params": { 00:08:28.566 "trtype": "pcie", 00:08:28.566 "traddr": "0000:00:06.0", 00:08:28.566 "name": "Nvme0" 00:08:28.566 }, 00:08:28.567 "method": "bdev_nvme_attach_controller" 00:08:28.567 }, 00:08:28.567 { 00:08:28.567 "method": "bdev_wait_for_examine" 00:08:28.567 } 00:08:28.567 ] 00:08:28.567 } 00:08:28.567 ] 00:08:28.567 } 00:08:28.825 [2024-12-08 05:07:18.381287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.825 [2024-12-08 05:07:18.414057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.825  [2024-12-08T05:07:18.869Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:29.083 00:08:29.083 05:07:18 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:29.083 05:07:18 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:29.083 05:07:18 -- dd/common.sh@31 -- # xtrace_disable 00:08:29.083 05:07:18 -- common/autotest_common.sh@10 -- # set +x 00:08:29.083 [2024-12-08 05:07:18.725931] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:29.083 [2024-12-08 05:07:18.726201] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69992 ] 00:08:29.083 { 00:08:29.083 "subsystems": [ 00:08:29.083 { 00:08:29.083 "subsystem": "bdev", 00:08:29.083 "config": [ 00:08:29.083 { 00:08:29.083 "params": { 00:08:29.083 "trtype": "pcie", 00:08:29.083 "traddr": "0000:00:06.0", 00:08:29.083 "name": "Nvme0" 00:08:29.083 }, 00:08:29.083 "method": "bdev_nvme_attach_controller" 00:08:29.083 }, 00:08:29.083 { 00:08:29.083 "method": "bdev_wait_for_examine" 00:08:29.083 } 00:08:29.083 ] 00:08:29.083 } 00:08:29.083 ] 00:08:29.083 } 00:08:29.083 [2024-12-08 05:07:18.859687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.341 [2024-12-08 05:07:18.893542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.341  [2024-12-08T05:07:19.385Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:29.599 00:08:29.599 05:07:19 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:29.599 05:07:19 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:29.599 05:07:19 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:29.599 05:07:19 -- dd/common.sh@11 -- # local nvme_ref= 00:08:29.599 05:07:19 -- dd/common.sh@12 -- # local size=49152 00:08:29.599 05:07:19 -- dd/common.sh@14 -- # local bs=1048576 00:08:29.599 05:07:19 -- dd/common.sh@15 -- # local count=1 00:08:29.599 05:07:19 -- dd/common.sh@18 -- # gen_conf 00:08:29.599 05:07:19 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:29.599 05:07:19 -- dd/common.sh@31 -- # xtrace_disable 00:08:29.599 05:07:19 -- common/autotest_common.sh@10 -- # set +x 00:08:29.599 [2024-12-08 
05:07:19.216536] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:29.599 [2024-12-08 05:07:19.216634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70010 ] 00:08:29.599 { 00:08:29.599 "subsystems": [ 00:08:29.599 { 00:08:29.599 "subsystem": "bdev", 00:08:29.599 "config": [ 00:08:29.599 { 00:08:29.599 "params": { 00:08:29.599 "trtype": "pcie", 00:08:29.599 "traddr": "0000:00:06.0", 00:08:29.599 "name": "Nvme0" 00:08:29.599 }, 00:08:29.599 "method": "bdev_nvme_attach_controller" 00:08:29.599 }, 00:08:29.599 { 00:08:29.599 "method": "bdev_wait_for_examine" 00:08:29.599 } 00:08:29.599 ] 00:08:29.599 } 00:08:29.599 ] 00:08:29.599 } 00:08:29.599 [2024-12-08 05:07:19.355752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.920 [2024-12-08 05:07:19.390835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.920  [2024-12-08T05:07:19.706Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:29.920 00:08:29.920 05:07:19 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:29.920 05:07:19 -- dd/basic_rw.sh@23 -- # count=3 00:08:29.920 05:07:19 -- dd/basic_rw.sh@24 -- # count=3 00:08:29.920 05:07:19 -- dd/basic_rw.sh@25 -- # size=49152 00:08:29.920 05:07:19 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:29.920 05:07:19 -- dd/common.sh@98 -- # xtrace_disable 00:08:29.920 05:07:19 -- common/autotest_common.sh@10 -- # set +x 00:08:30.486 05:07:20 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:30.486 05:07:20 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:30.486 05:07:20 -- dd/common.sh@31 -- # xtrace_disable 00:08:30.486 05:07:20 -- common/autotest_common.sh@10 -- # set +x 00:08:30.486 [2024-12-08 05:07:20.172068] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
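The same cycle repeats below for 16 KiB blocks. Inferred from the bs/qd/count values visible in this trace, the driver loop behind these rounds looks roughly like the following; the run_one_round wrapper is hypothetical shorthand for the write/read/diff/zero sequence sketched earlier:
bss=(8192 16384)   # block sizes exercised in this log
qds=(1 64)         # queue depths exercised in this log
for bs in "${bss[@]}"; do
  # the trace copies 7 blocks at 8 KiB (56 KiB total) and 3 blocks at 16 KiB (48 KiB total)
  if [ "$bs" -eq 8192 ]; then count=7; else count=3; fi
  for qd in "${qds[@]}"; do
    run_one_round "$bs" "$qd" "$count"
  done
done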
00:08:30.486 [2024-12-08 05:07:20.172359] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70024 ] 00:08:30.486 { 00:08:30.486 "subsystems": [ 00:08:30.486 { 00:08:30.486 "subsystem": "bdev", 00:08:30.486 "config": [ 00:08:30.486 { 00:08:30.486 "params": { 00:08:30.486 "trtype": "pcie", 00:08:30.486 "traddr": "0000:00:06.0", 00:08:30.486 "name": "Nvme0" 00:08:30.486 }, 00:08:30.486 "method": "bdev_nvme_attach_controller" 00:08:30.486 }, 00:08:30.486 { 00:08:30.486 "method": "bdev_wait_for_examine" 00:08:30.486 } 00:08:30.486 ] 00:08:30.486 } 00:08:30.486 ] 00:08:30.486 } 00:08:30.745 [2024-12-08 05:07:20.312172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.745 [2024-12-08 05:07:20.344936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.745  [2024-12-08T05:07:20.788Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:31.002 00:08:31.002 05:07:20 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:31.002 05:07:20 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:31.002 05:07:20 -- dd/common.sh@31 -- # xtrace_disable 00:08:31.002 05:07:20 -- common/autotest_common.sh@10 -- # set +x 00:08:31.002 [2024-12-08 05:07:20.649325] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:31.002 [2024-12-08 05:07:20.649418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70036 ] 00:08:31.002 { 00:08:31.002 "subsystems": [ 00:08:31.002 { 00:08:31.002 "subsystem": "bdev", 00:08:31.002 "config": [ 00:08:31.002 { 00:08:31.002 "params": { 00:08:31.002 "trtype": "pcie", 00:08:31.002 "traddr": "0000:00:06.0", 00:08:31.002 "name": "Nvme0" 00:08:31.002 }, 00:08:31.002 "method": "bdev_nvme_attach_controller" 00:08:31.002 }, 00:08:31.002 { 00:08:31.002 "method": "bdev_wait_for_examine" 00:08:31.002 } 00:08:31.002 ] 00:08:31.002 } 00:08:31.002 ] 00:08:31.002 } 00:08:31.265 [2024-12-08 05:07:20.789437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.265 [2024-12-08 05:07:20.822472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.265  [2024-12-08T05:07:21.350Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:31.564 00:08:31.564 05:07:21 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:31.564 05:07:21 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:31.564 05:07:21 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:31.564 05:07:21 -- dd/common.sh@11 -- # local nvme_ref= 00:08:31.564 05:07:21 -- dd/common.sh@12 -- # local size=49152 00:08:31.564 05:07:21 -- dd/common.sh@14 -- # local bs=1048576 00:08:31.564 05:07:21 -- dd/common.sh@15 -- # local count=1 00:08:31.564 05:07:21 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:31.564 05:07:21 -- dd/common.sh@18 -- # gen_conf 00:08:31.564 05:07:21 -- dd/common.sh@31 -- # xtrace_disable 00:08:31.564 05:07:21 -- common/autotest_common.sh@10 -- # set +x 00:08:31.564 [2024-12-08 
05:07:21.140762] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:31.564 [2024-12-08 05:07:21.140851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70044 ] 00:08:31.564 { 00:08:31.564 "subsystems": [ 00:08:31.564 { 00:08:31.564 "subsystem": "bdev", 00:08:31.564 "config": [ 00:08:31.564 { 00:08:31.564 "params": { 00:08:31.564 "trtype": "pcie", 00:08:31.564 "traddr": "0000:00:06.0", 00:08:31.564 "name": "Nvme0" 00:08:31.564 }, 00:08:31.564 "method": "bdev_nvme_attach_controller" 00:08:31.564 }, 00:08:31.564 { 00:08:31.564 "method": "bdev_wait_for_examine" 00:08:31.564 } 00:08:31.564 ] 00:08:31.564 } 00:08:31.564 ] 00:08:31.564 } 00:08:31.564 [2024-12-08 05:07:21.276138] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.564 [2024-12-08 05:07:21.309156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.835  [2024-12-08T05:07:21.621Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:31.835 00:08:31.835 ************************************ 00:08:31.835 END TEST dd_rw 00:08:31.835 ************************************ 00:08:31.835 00:08:31.835 real 0m12.568s 00:08:31.835 user 0m9.144s 00:08:31.835 sys 0m2.281s 00:08:31.835 05:07:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:31.835 05:07:21 -- common/autotest_common.sh@10 -- # set +x 00:08:32.093 05:07:21 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:32.093 05:07:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:32.093 05:07:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.093 05:07:21 -- common/autotest_common.sh@10 -- # set +x 00:08:32.093 ************************************ 00:08:32.093 START TEST dd_rw_offset 00:08:32.093 ************************************ 00:08:32.093 05:07:21 -- common/autotest_common.sh@1114 -- # basic_offset 00:08:32.093 05:07:21 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:32.093 05:07:21 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:32.093 05:07:21 -- dd/common.sh@98 -- # xtrace_disable 00:08:32.093 05:07:21 -- common/autotest_common.sh@10 -- # set +x 00:08:32.093 05:07:21 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:32.094 05:07:21 -- dd/basic_rw.sh@56 -- # 
data=1q0lk6mecnjl5qgsg3e3fqsgsnv8c29ukndldwkc4jhqfuhcaxainnw3nqoudc32okb6hr2nstx3wjka10292srx5ed4euul44soykmj6jok9qrt2uhh4s0gzu4m53rj3weletkecckdc1st3xkeq3jcww6kqkvsx1k1yhvfptlu2nfkw8vx3bar2gbdasju0kekfl3l7p75tgwwcttj68zue55kdnf7rpgi7j1d4jpwovcf35yiugw39t4qldkjoxx7shkb321c4p4kky7smfudstzdi1ch86i8c33wjfs1opy9fdpqrqnq2vii3vjbzq04mdcodyqiv1ggfgdffauhdy8s78zq80mj4r9jm1rofhr594usjun035jnyo3nmgu1vdm7phj4jjabf8q17hsrg2j0rbqldr8fv0drtvela8zzchvfph0mpv2q9wv2lz06w70mwwnku8x3s4c13fh7rd5tvjwgezfn186pu3570su2nhdkkv1c8i9aot0ieb1hp0alycjmuio7xff46gsqlh1ean952v1xbgw4g91w8sjej1dk3n0tjv75pkyoz1znlhsmyhsp2jmoo900df0b4e3ynquyu2qebgaaauylzcqtld3hsbxy4fiihdc5nuwqtxpyi3apoe1kyu52ex1gkxneac4qdp6l9evd7elmvk1co771pcmq1nnxlh10qdg72ohydx1wn56whnoqrzwyed6ydvy1fy7169hgmcdfhrhkmrblx3nzatt47bain5xhdn5ktw2jl4oo3pmom9ak7kc9e37a0yzknvhdrk4g3m5q03g8xs4th6gl3nhs06oclgwu1fyyk19o2t4o3p7h5tfxowxm4ihsb7clvo4enm0t3nnbxzyyysanzhhwdg5eeumu8reohk5eguk3mu0172zyksgfskl85mzhfadj2wfq6w1w3sze62mqiisikk5mq5ybguthb0nsw3cxj4qspc83puxp8pwkv35vxe2bpmz2snj748g2v8mshhbzqgw92hookpos7ncelof275c90vy1sun2qgovsshgr93nde9xyp09olhy1gfkpuzxxoy60bmmcg6pbu2vjria7xpy52p5ehvcp7qwry1drx8zaxbibus4i9zzi5n18gk3lh3pa6ugim6tyeqntgtygydp3twmihirhw61yql135u3v2sbsfqi5tdy5usyxp5bbwh6qjhszq7hgtgb2sm3tc8ss9ki65lhoew97hddss45hbdwkjgesjigchyyndqdc7lbjsvde192jz59n1vrze6i4h7db9rc0dzqaojtispfqw0xgrbfa46xpoayftq0yz81ajaln1cxo1yv0so3u2xd17gkywlf5xyscq08j4v80q34yiv4wxp3bel7166jwcbu42qzx8c3avhdtasrkb0awln8sbi87cv5p8tnd7bj5uv32wg3m6i53estfphxmmvq3qst9cdqz22ykufsia8xngcnmcavm0xz959knfnebyhxrh4ipqoo38qqpqwgmd5fletgeh7h932zvm7ta4ycnq2tz4ekae98lcmtkl95b75q51ztxmtp0dhpxqnj4ejnc7j4iwx5casuprsa6nuryoknc5g3ptw9rmsspe6nho9ax7o69y05y3mnwztgel3kuptzwutonxi6jewnh84xay9epw24s82rhznnefx6a2te6izfe0yxn8nznri1b1vd7pke9druoa8fdsfnxcwj57bx80gp4swuq61ct5730lmq7it3s3gocr8xk3p7x4poxsrlrvzs5amjzv9g6h5tefsbqe731rp0h1bu9jr60r4zjrswkfruoz3iw8xfm25jjuydf8jtq4dcvfrxk56ijy5tsv3alzjj7gkzw5w6thcvmrisw62ryszsfhuazpfzwlimjk7ywex0czgepa2wzs78idls91s3vrvd7hi59esiafdugqdrd2bmr3tgxlb8yaakuno122jsru7vb7l6qhohadypm4eddxun6vn5bc91th24uoh8gcrv4ttva7adnr1b8j9t5wqf34kq5aijg4mv6wgdvkl1d7qlycvwyaq3dc92sojxsnlpk1egjefybdtaamk3ia89hbf8edggm5pd906mtc3f2ithijg89mar3eukhd30wuz5hx0n2bpxwhkncs1cw75im3tbuacaav167pn35vf4vd1b8jvex57ytjbco0vycxad0na0konf78ch9ccb4nz12bjtx8almrrn2hbfwlnil7uiepnu0qyqxnc1di2ogvjc5604vs68n5u6k6v3ugx909h2mnzye16nezjzeoswymjrsnffkykozxbpkqmipk8uaxn5a5eranqac78678ldwm2vw7ounofqnx25locetjcvzm9yan5hp2xzur93bl25i9wr7cf1exlg7262hilbwxswh41qftpn282folj021s0m8klmfvvnb4bladqoofwta2xtru6fa4lu4ccr6lv0q35kr4namcuje71ewasf900b204t419vb9u2ms57p3vqmuwdjry91rx4eys4qj40d0ti5u0v1gn9xfciph36jxvn3h4p8tw2zywu58t2wtd2s0fr6qixmn782h5tr4q76fc9qiw08ot3tdh2geixianm8etr3cxuh1ppi4nfrdt9rhqm4x5w29ac5wu49doe8d0c7ro7iq87tu9nicm8tgcp5pvvqlohjrv6idqh6d9ral5p0gz70dj0r1szb9qaio56q898dd3dsdafy1us1cy05cbmx1d1ifx4y28lcrx6zh3r5clqwyl407vwfzo9g6kt8p8xiu9zx4lbbfwqred34jsqd6xaklwmrt08lapmkz7va2h18mivoe9kzsnm8ymhaqrqcmibt3dj2ds44nfz42usz9fq8ez8srq272cfgkenk5pkh1752orzam4utprxvrtcirll5wmezqylo6erbcscg2rftlxvbmv2j402iusewnpp773uw0ltn1dfjl44qpr99ld5kf1whb288fwgv92gvvp00eihi2oudtxzv5a084vzc13xojk9kypwerb3em64g4yn7pb6kwkbwoeczqdrry9swmz4x347fb2a54gsno95l663heaw9xjlv2lla801u63vmnp16o14eddx3gjiqkodzkaseysou9abcvqh872pkzxenye1cxjpqlx0uihmysyathavcjri73638cq5k9vhgnbf9kdtr80xa2m1cncowech4w3aibqcm8tcmaco4poo40qgku27kspyl95poy5aarar4j20hvdtf4dhs2uyherqrlrj8texk2qh1tb4tahazt0dw5e64iiz48ec55juv1k52z19rpxt1pyq3n76tbsdvjwfbhg192n0dzg2qhfe659gyw1t8153xi26smq0mp1b63qvv1hf0japam3jk435667zj7jxiij13zqg97m84eu5jpj8itw417rmbuwisif754iqqbq5z0eoy2tj8ppeqsyvmrofg1oo77pobeklu
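The dd_rw_offset run starting here exercises the --seek/--skip options: 4 KiB of generated data is written one block into the bdev, read back from the same offset, and compared byte-for-byte. A condensed sketch, with the file redirections assumed since the trace only shows the variable assignments and spdk_dd invocations:
data=$(gen_bytes 4096)                            # helper from dd/common.sh, per the trace
printf '%s' "$data" > "$SPDK/test/dd/dd.dump0"    # assumption: how the data reaches dump0 is not shown here
"$DD" --if="$SPDK/test/dd/dd.dump0" --ob=Nvme0n1 --seek=1 --json <(gen_conf)
"$DD" --ib=Nvme0n1 --of="$SPDK/test/dd/dd.dump1" --skip=1 --count=1 --json <(gen_conf)
read -rn4096 data_check < "$SPDK/test/dd/dd.dump1"
[[ $data == "$data_check" ]]   # the long backslash-escaped pattern in the trace below appears to be xtrace's rendering of this check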
v59qelcnw9ycckratgigur763ckpey4t6mdul4ob1rvksmichg6s989dc06tfayi7z7v56o9xime9uf48ocquhi9b0lqytpwxo7r3zofdui04kmup2qt9wks67zgtq78capdn1ilqcte0nie9bhq4452ubza99vsdtrl3hgn164cnmmjtg9f9w3od27r70rbtezmjzf7kklqrl1noy5q0czud5iwroll27aze5oairv6q3ivb0g602ruqc20lygje6aa1tkjz5xpkxippr1eer85shc0h6rpb7kdtmg3qla072sxb17iy02bdx6hh8d9mvp26m3exfvz2r392jsaif52xvcs006nzdqmjedg52ekvwngibqezi50xw08sz8dc2iprhvcp9dnpiujk71ea0r99wpynk8y1ov0kyim7olpkcjlwsdufh6u89ssxkju8uj65vy5x36mirx1yjh7c26tf2g0rc5bmmhcmdcgk2jb03przunarnob2wkcq6lwhrhefq27hop6w7tphsypf4xy41p14y1777 00:08:32.094 05:07:21 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:32.094 05:07:21 -- dd/basic_rw.sh@59 -- # gen_conf 00:08:32.094 05:07:21 -- dd/common.sh@31 -- # xtrace_disable 00:08:32.094 05:07:21 -- common/autotest_common.sh@10 -- # set +x 00:08:32.094 { 00:08:32.094 "subsystems": [ 00:08:32.094 { 00:08:32.094 "subsystem": "bdev", 00:08:32.094 "config": [ 00:08:32.094 { 00:08:32.094 "params": { 00:08:32.094 "trtype": "pcie", 00:08:32.094 "traddr": "0000:00:06.0", 00:08:32.094 "name": "Nvme0" 00:08:32.094 }, 00:08:32.094 "method": "bdev_nvme_attach_controller" 00:08:32.094 }, 00:08:32.094 { 00:08:32.094 "method": "bdev_wait_for_examine" 00:08:32.094 } 00:08:32.094 ] 00:08:32.094 } 00:08:32.094 ] 00:08:32.094 } 00:08:32.094 [2024-12-08 05:07:21.750366] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:32.094 [2024-12-08 05:07:21.750632] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70079 ] 00:08:32.353 [2024-12-08 05:07:21.888816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.353 [2024-12-08 05:07:21.927273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.353  [2024-12-08T05:07:22.398Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:32.612 00:08:32.612 05:07:22 -- dd/basic_rw.sh@65 -- # gen_conf 00:08:32.612 05:07:22 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:32.612 05:07:22 -- dd/common.sh@31 -- # xtrace_disable 00:08:32.612 05:07:22 -- common/autotest_common.sh@10 -- # set +x 00:08:32.612 [2024-12-08 05:07:22.237283] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:32.612 [2024-12-08 05:07:22.237568] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70092 ] 00:08:32.612 { 00:08:32.612 "subsystems": [ 00:08:32.612 { 00:08:32.612 "subsystem": "bdev", 00:08:32.612 "config": [ 00:08:32.612 { 00:08:32.612 "params": { 00:08:32.612 "trtype": "pcie", 00:08:32.612 "traddr": "0000:00:06.0", 00:08:32.612 "name": "Nvme0" 00:08:32.612 }, 00:08:32.612 "method": "bdev_nvme_attach_controller" 00:08:32.612 }, 00:08:32.612 { 00:08:32.612 "method": "bdev_wait_for_examine" 00:08:32.612 } 00:08:32.612 ] 00:08:32.612 } 00:08:32.612 ] 00:08:32.612 } 00:08:32.612 [2024-12-08 05:07:22.374183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.871 [2024-12-08 05:07:22.407598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.871  [2024-12-08T05:07:22.918Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:33.132 00:08:33.132 05:07:22 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:33.132 ************************************ 00:08:33.132 END TEST dd_rw_offset 00:08:33.132 05:07:22 -- dd/basic_rw.sh@72 -- # [[ 1q0lk6mecnjl5qgsg3e3fqsgsnv8c29ukndldwkc4jhqfuhcaxainnw3nqoudc32okb6hr2nstx3wjka10292srx5ed4euul44soykmj6jok9qrt2uhh4s0gzu4m53rj3weletkecckdc1st3xkeq3jcww6kqkvsx1k1yhvfptlu2nfkw8vx3bar2gbdasju0kekfl3l7p75tgwwcttj68zue55kdnf7rpgi7j1d4jpwovcf35yiugw39t4qldkjoxx7shkb321c4p4kky7smfudstzdi1ch86i8c33wjfs1opy9fdpqrqnq2vii3vjbzq04mdcodyqiv1ggfgdffauhdy8s78zq80mj4r9jm1rofhr594usjun035jnyo3nmgu1vdm7phj4jjabf8q17hsrg2j0rbqldr8fv0drtvela8zzchvfph0mpv2q9wv2lz06w70mwwnku8x3s4c13fh7rd5tvjwgezfn186pu3570su2nhdkkv1c8i9aot0ieb1hp0alycjmuio7xff46gsqlh1ean952v1xbgw4g91w8sjej1dk3n0tjv75pkyoz1znlhsmyhsp2jmoo900df0b4e3ynquyu2qebgaaauylzcqtld3hsbxy4fiihdc5nuwqtxpyi3apoe1kyu52ex1gkxneac4qdp6l9evd7elmvk1co771pcmq1nnxlh10qdg72ohydx1wn56whnoqrzwyed6ydvy1fy7169hgmcdfhrhkmrblx3nzatt47bain5xhdn5ktw2jl4oo3pmom9ak7kc9e37a0yzknvhdrk4g3m5q03g8xs4th6gl3nhs06oclgwu1fyyk19o2t4o3p7h5tfxowxm4ihsb7clvo4enm0t3nnbxzyyysanzhhwdg5eeumu8reohk5eguk3mu0172zyksgfskl85mzhfadj2wfq6w1w3sze62mqiisikk5mq5ybguthb0nsw3cxj4qspc83puxp8pwkv35vxe2bpmz2snj748g2v8mshhbzqgw92hookpos7ncelof275c90vy1sun2qgovsshgr93nde9xyp09olhy1gfkpuzxxoy60bmmcg6pbu2vjria7xpy52p5ehvcp7qwry1drx8zaxbibus4i9zzi5n18gk3lh3pa6ugim6tyeqntgtygydp3twmihirhw61yql135u3v2sbsfqi5tdy5usyxp5bbwh6qjhszq7hgtgb2sm3tc8ss9ki65lhoew97hddss45hbdwkjgesjigchyyndqdc7lbjsvde192jz59n1vrze6i4h7db9rc0dzqaojtispfqw0xgrbfa46xpoayftq0yz81ajaln1cxo1yv0so3u2xd17gkywlf5xyscq08j4v80q34yiv4wxp3bel7166jwcbu42qzx8c3avhdtasrkb0awln8sbi87cv5p8tnd7bj5uv32wg3m6i53estfphxmmvq3qst9cdqz22ykufsia8xngcnmcavm0xz959knfnebyhxrh4ipqoo38qqpqwgmd5fletgeh7h932zvm7ta4ycnq2tz4ekae98lcmtkl95b75q51ztxmtp0dhpxqnj4ejnc7j4iwx5casuprsa6nuryoknc5g3ptw9rmsspe6nho9ax7o69y05y3mnwztgel3kuptzwutonxi6jewnh84xay9epw24s82rhznnefx6a2te6izfe0yxn8nznri1b1vd7pke9druoa8fdsfnxcwj57bx80gp4swuq61ct5730lmq7it3s3gocr8xk3p7x4poxsrlrvzs5amjzv9g6h5tefsbqe731rp0h1bu9jr60r4zjrswkfruoz3iw8xfm25jjuydf8jtq4dcvfrxk56ijy5tsv3alzjj7gkzw5w6thcvmrisw62ryszsfhuazpfzwlimjk7ywex0czgepa2wzs78idls91s3vrvd7hi59esiafdugqdrd2bmr3tgxlb8yaakuno122jsru7vb7l6qhohadypm4eddxun6vn5bc91th24uoh8gcrv4ttva7adnr1b8j9t5wqf34kq5aijg4mv6wgdvkl1d7qlycvwyaq3dc92sojxsnlpk1egjefybdtaamk3ia89hbf8edggm5pd906mtc3f2ithijg89mar3eukhd30wuz5hx0n2bpxwhkncs1cw75im3tbuacaav167pn35vf4vd1b8jvex57ytjbco0vycxad0na0konf78ch9ccb4nz12bjtx8almrrn2hbfwl
nil7uiepnu0qyqxnc1di2ogvjc5604vs68n5u6k6v3ugx909h2mnzye16nezjzeoswymjrsnffkykozxbpkqmipk8uaxn5a5eranqac78678ldwm2vw7ounofqnx25locetjcvzm9yan5hp2xzur93bl25i9wr7cf1exlg7262hilbwxswh41qftpn282folj021s0m8klmfvvnb4bladqoofwta2xtru6fa4lu4ccr6lv0q35kr4namcuje71ewasf900b204t419vb9u2ms57p3vqmuwdjry91rx4eys4qj40d0ti5u0v1gn9xfciph36jxvn3h4p8tw2zywu58t2wtd2s0fr6qixmn782h5tr4q76fc9qiw08ot3tdh2geixianm8etr3cxuh1ppi4nfrdt9rhqm4x5w29ac5wu49doe8d0c7ro7iq87tu9nicm8tgcp5pvvqlohjrv6idqh6d9ral5p0gz70dj0r1szb9qaio56q898dd3dsdafy1us1cy05cbmx1d1ifx4y28lcrx6zh3r5clqwyl407vwfzo9g6kt8p8xiu9zx4lbbfwqred34jsqd6xaklwmrt08lapmkz7va2h18mivoe9kzsnm8ymhaqrqcmibt3dj2ds44nfz42usz9fq8ez8srq272cfgkenk5pkh1752orzam4utprxvrtcirll5wmezqylo6erbcscg2rftlxvbmv2j402iusewnpp773uw0ltn1dfjl44qpr99ld5kf1whb288fwgv92gvvp00eihi2oudtxzv5a084vzc13xojk9kypwerb3em64g4yn7pb6kwkbwoeczqdrry9swmz4x347fb2a54gsno95l663heaw9xjlv2lla801u63vmnp16o14eddx3gjiqkodzkaseysou9abcvqh872pkzxenye1cxjpqlx0uihmysyathavcjri73638cq5k9vhgnbf9kdtr80xa2m1cncowech4w3aibqcm8tcmaco4poo40qgku27kspyl95poy5aarar4j20hvdtf4dhs2uyherqrlrj8texk2qh1tb4tahazt0dw5e64iiz48ec55juv1k52z19rpxt1pyq3n76tbsdvjwfbhg192n0dzg2qhfe659gyw1t8153xi26smq0mp1b63qvv1hf0japam3jk435667zj7jxiij13zqg97m84eu5jpj8itw417rmbuwisif754iqqbq5z0eoy2tj8ppeqsyvmrofg1oo77pobekluv59qelcnw9ycckratgigur763ckpey4t6mdul4ob1rvksmichg6s989dc06tfayi7z7v56o9xime9uf48ocquhi9b0lqytpwxo7r3zofdui04kmup2qt9wks67zgtq78capdn1ilqcte0nie9bhq4452ubza99vsdtrl3hgn164cnmmjtg9f9w3od27r70rbtezmjzf7kklqrl1noy5q0czud5iwroll27aze5oairv6q3ivb0g602ruqc20lygje6aa1tkjz5xpkxippr1eer85shc0h6rpb7kdtmg3qla072sxb17iy02bdx6hh8d9mvp26m3exfvz2r392jsaif52xvcs006nzdqmjedg52ekvwngibqezi50xw08sz8dc2iprhvcp9dnpiujk71ea0r99wpynk8y1ov0kyim7olpkcjlwsdufh6u89ssxkju8uj65vy5x36mirx1yjh7c26tf2g0rc5bmmhcmdcgk2jb03przunarnob2wkcq6lwhrhefq27hop6w7tphsypf4xy41p14y1777 == 
\1\q\0\l\k\6\m\e\c\n\j\l\5\q\g\s\g\3\e\3\f\q\s\g\s\n\v\8\c\2\9\u\k\n\d\l\d\w\k\c\4\j\h\q\f\u\h\c\a\x\a\i\n\n\w\3\n\q\o\u\d\c\3\2\o\k\b\6\h\r\2\n\s\t\x\3\w\j\k\a\1\0\2\9\2\s\r\x\5\e\d\4\e\u\u\l\4\4\s\o\y\k\m\j\6\j\o\k\9\q\r\t\2\u\h\h\4\s\0\g\z\u\4\m\5\3\r\j\3\w\e\l\e\t\k\e\c\c\k\d\c\1\s\t\3\x\k\e\q\3\j\c\w\w\6\k\q\k\v\s\x\1\k\1\y\h\v\f\p\t\l\u\2\n\f\k\w\8\v\x\3\b\a\r\2\g\b\d\a\s\j\u\0\k\e\k\f\l\3\l\7\p\7\5\t\g\w\w\c\t\t\j\6\8\z\u\e\5\5\k\d\n\f\7\r\p\g\i\7\j\1\d\4\j\p\w\o\v\c\f\3\5\y\i\u\g\w\3\9\t\4\q\l\d\k\j\o\x\x\7\s\h\k\b\3\2\1\c\4\p\4\k\k\y\7\s\m\f\u\d\s\t\z\d\i\1\c\h\8\6\i\8\c\3\3\w\j\f\s\1\o\p\y\9\f\d\p\q\r\q\n\q\2\v\i\i\3\v\j\b\z\q\0\4\m\d\c\o\d\y\q\i\v\1\g\g\f\g\d\f\f\a\u\h\d\y\8\s\7\8\z\q\8\0\m\j\4\r\9\j\m\1\r\o\f\h\r\5\9\4\u\s\j\u\n\0\3\5\j\n\y\o\3\n\m\g\u\1\v\d\m\7\p\h\j\4\j\j\a\b\f\8\q\1\7\h\s\r\g\2\j\0\r\b\q\l\d\r\8\f\v\0\d\r\t\v\e\l\a\8\z\z\c\h\v\f\p\h\0\m\p\v\2\q\9\w\v\2\l\z\0\6\w\7\0\m\w\w\n\k\u\8\x\3\s\4\c\1\3\f\h\7\r\d\5\t\v\j\w\g\e\z\f\n\1\8\6\p\u\3\5\7\0\s\u\2\n\h\d\k\k\v\1\c\8\i\9\a\o\t\0\i\e\b\1\h\p\0\a\l\y\c\j\m\u\i\o\7\x\f\f\4\6\g\s\q\l\h\1\e\a\n\9\5\2\v\1\x\b\g\w\4\g\9\1\w\8\s\j\e\j\1\d\k\3\n\0\t\j\v\7\5\p\k\y\o\z\1\z\n\l\h\s\m\y\h\s\p\2\j\m\o\o\9\0\0\d\f\0\b\4\e\3\y\n\q\u\y\u\2\q\e\b\g\a\a\a\u\y\l\z\c\q\t\l\d\3\h\s\b\x\y\4\f\i\i\h\d\c\5\n\u\w\q\t\x\p\y\i\3\a\p\o\e\1\k\y\u\5\2\e\x\1\g\k\x\n\e\a\c\4\q\d\p\6\l\9\e\v\d\7\e\l\m\v\k\1\c\o\7\7\1\p\c\m\q\1\n\n\x\l\h\1\0\q\d\g\7\2\o\h\y\d\x\1\w\n\5\6\w\h\n\o\q\r\z\w\y\e\d\6\y\d\v\y\1\f\y\7\1\6\9\h\g\m\c\d\f\h\r\h\k\m\r\b\l\x\3\n\z\a\t\t\4\7\b\a\i\n\5\x\h\d\n\5\k\t\w\2\j\l\4\o\o\3\p\m\o\m\9\a\k\7\k\c\9\e\3\7\a\0\y\z\k\n\v\h\d\r\k\4\g\3\m\5\q\0\3\g\8\x\s\4\t\h\6\g\l\3\n\h\s\0\6\o\c\l\g\w\u\1\f\y\y\k\1\9\o\2\t\4\o\3\p\7\h\5\t\f\x\o\w\x\m\4\i\h\s\b\7\c\l\v\o\4\e\n\m\0\t\3\n\n\b\x\z\y\y\y\s\a\n\z\h\h\w\d\g\5\e\e\u\m\u\8\r\e\o\h\k\5\e\g\u\k\3\m\u\0\1\7\2\z\y\k\s\g\f\s\k\l\8\5\m\z\h\f\a\d\j\2\w\f\q\6\w\1\w\3\s\z\e\6\2\m\q\i\i\s\i\k\k\5\m\q\5\y\b\g\u\t\h\b\0\n\s\w\3\c\x\j\4\q\s\p\c\8\3\p\u\x\p\8\p\w\k\v\3\5\v\x\e\2\b\p\m\z\2\s\n\j\7\4\8\g\2\v\8\m\s\h\h\b\z\q\g\w\9\2\h\o\o\k\p\o\s\7\n\c\e\l\o\f\2\7\5\c\9\0\v\y\1\s\u\n\2\q\g\o\v\s\s\h\g\r\9\3\n\d\e\9\x\y\p\0\9\o\l\h\y\1\g\f\k\p\u\z\x\x\o\y\6\0\b\m\m\c\g\6\p\b\u\2\v\j\r\i\a\7\x\p\y\5\2\p\5\e\h\v\c\p\7\q\w\r\y\1\d\r\x\8\z\a\x\b\i\b\u\s\4\i\9\z\z\i\5\n\1\8\g\k\3\l\h\3\p\a\6\u\g\i\m\6\t\y\e\q\n\t\g\t\y\g\y\d\p\3\t\w\m\i\h\i\r\h\w\6\1\y\q\l\1\3\5\u\3\v\2\s\b\s\f\q\i\5\t\d\y\5\u\s\y\x\p\5\b\b\w\h\6\q\j\h\s\z\q\7\h\g\t\g\b\2\s\m\3\t\c\8\s\s\9\k\i\6\5\l\h\o\e\w\9\7\h\d\d\s\s\4\5\h\b\d\w\k\j\g\e\s\j\i\g\c\h\y\y\n\d\q\d\c\7\l\b\j\s\v\d\e\1\9\2\j\z\5\9\n\1\v\r\z\e\6\i\4\h\7\d\b\9\r\c\0\d\z\q\a\o\j\t\i\s\p\f\q\w\0\x\g\r\b\f\a\4\6\x\p\o\a\y\f\t\q\0\y\z\8\1\a\j\a\l\n\1\c\x\o\1\y\v\0\s\o\3\u\2\x\d\1\7\g\k\y\w\l\f\5\x\y\s\c\q\0\8\j\4\v\8\0\q\3\4\y\i\v\4\w\x\p\3\b\e\l\7\1\6\6\j\w\c\b\u\4\2\q\z\x\8\c\3\a\v\h\d\t\a\s\r\k\b\0\a\w\l\n\8\s\b\i\8\7\c\v\5\p\8\t\n\d\7\b\j\5\u\v\3\2\w\g\3\m\6\i\5\3\e\s\t\f\p\h\x\m\m\v\q\3\q\s\t\9\c\d\q\z\2\2\y\k\u\f\s\i\a\8\x\n\g\c\n\m\c\a\v\m\0\x\z\9\5\9\k\n\f\n\e\b\y\h\x\r\h\4\i\p\q\o\o\3\8\q\q\p\q\w\g\m\d\5\f\l\e\t\g\e\h\7\h\9\3\2\z\v\m\7\t\a\4\y\c\n\q\2\t\z\4\e\k\a\e\9\8\l\c\m\t\k\l\9\5\b\7\5\q\5\1\z\t\x\m\t\p\0\d\h\p\x\q\n\j\4\e\j\n\c\7\j\4\i\w\x\5\c\a\s\u\p\r\s\a\6\n\u\r\y\o\k\n\c\5\g\3\p\t\w\9\r\m\s\s\p\e\6\n\h\o\9\a\x\7\o\6\9\y\0\5\y\3\m\n\w\z\t\g\e\l\3\k\u\p\t\z\w\u\t\o\n\x\i\6\j\e\w\n\h\8\4\x\a\y\9\e\p\w\2\4\s\8\2\r\h\z\n\n\e\f\x\6\a\2\t\e\6\i\z\f\e\0\y\x\n\8\n\z\n\r\i\1\b\1\v\d\7\p\k\e\9\d\r\u\o\a\8\f\d\s\f\n\x\c\w\j\5\7\b\x\8\0\g\p\4\s\w\u\q\6\1\c\t\5\7\3\0\l\m\q\7\
i\t\3\s\3\g\o\c\r\8\x\k\3\p\7\x\4\p\o\x\s\r\l\r\v\z\s\5\a\m\j\z\v\9\g\6\h\5\t\e\f\s\b\q\e\7\3\1\r\p\0\h\1\b\u\9\j\r\6\0\r\4\z\j\r\s\w\k\f\r\u\o\z\3\i\w\8\x\f\m\2\5\j\j\u\y\d\f\8\j\t\q\4\d\c\v\f\r\x\k\5\6\i\j\y\5\t\s\v\3\a\l\z\j\j\7\g\k\z\w\5\w\6\t\h\c\v\m\r\i\s\w\6\2\r\y\s\z\s\f\h\u\a\z\p\f\z\w\l\i\m\j\k\7\y\w\e\x\0\c\z\g\e\p\a\2\w\z\s\7\8\i\d\l\s\9\1\s\3\v\r\v\d\7\h\i\5\9\e\s\i\a\f\d\u\g\q\d\r\d\2\b\m\r\3\t\g\x\l\b\8\y\a\a\k\u\n\o\1\2\2\j\s\r\u\7\v\b\7\l\6\q\h\o\h\a\d\y\p\m\4\e\d\d\x\u\n\6\v\n\5\b\c\9\1\t\h\2\4\u\o\h\8\g\c\r\v\4\t\t\v\a\7\a\d\n\r\1\b\8\j\9\t\5\w\q\f\3\4\k\q\5\a\i\j\g\4\m\v\6\w\g\d\v\k\l\1\d\7\q\l\y\c\v\w\y\a\q\3\d\c\9\2\s\o\j\x\s\n\l\p\k\1\e\g\j\e\f\y\b\d\t\a\a\m\k\3\i\a\8\9\h\b\f\8\e\d\g\g\m\5\p\d\9\0\6\m\t\c\3\f\2\i\t\h\i\j\g\8\9\m\a\r\3\e\u\k\h\d\3\0\w\u\z\5\h\x\0\n\2\b\p\x\w\h\k\n\c\s\1\c\w\7\5\i\m\3\t\b\u\a\c\a\a\v\1\6\7\p\n\3\5\v\f\4\v\d\1\b\8\j\v\e\x\5\7\y\t\j\b\c\o\0\v\y\c\x\a\d\0\n\a\0\k\o\n\f\7\8\c\h\9\c\c\b\4\n\z\1\2\b\j\t\x\8\a\l\m\r\r\n\2\h\b\f\w\l\n\i\l\7\u\i\e\p\n\u\0\q\y\q\x\n\c\1\d\i\2\o\g\v\j\c\5\6\0\4\v\s\6\8\n\5\u\6\k\6\v\3\u\g\x\9\0\9\h\2\m\n\z\y\e\1\6\n\e\z\j\z\e\o\s\w\y\m\j\r\s\n\f\f\k\y\k\o\z\x\b\p\k\q\m\i\p\k\8\u\a\x\n\5\a\5\e\r\a\n\q\a\c\7\8\6\7\8\l\d\w\m\2\v\w\7\o\u\n\o\f\q\n\x\2\5\l\o\c\e\t\j\c\v\z\m\9\y\a\n\5\h\p\2\x\z\u\r\9\3\b\l\2\5\i\9\w\r\7\c\f\1\e\x\l\g\7\2\6\2\h\i\l\b\w\x\s\w\h\4\1\q\f\t\p\n\2\8\2\f\o\l\j\0\2\1\s\0\m\8\k\l\m\f\v\v\n\b\4\b\l\a\d\q\o\o\f\w\t\a\2\x\t\r\u\6\f\a\4\l\u\4\c\c\r\6\l\v\0\q\3\5\k\r\4\n\a\m\c\u\j\e\7\1\e\w\a\s\f\9\0\0\b\2\0\4\t\4\1\9\v\b\9\u\2\m\s\5\7\p\3\v\q\m\u\w\d\j\r\y\9\1\r\x\4\e\y\s\4\q\j\4\0\d\0\t\i\5\u\0\v\1\g\n\9\x\f\c\i\p\h\3\6\j\x\v\n\3\h\4\p\8\t\w\2\z\y\w\u\5\8\t\2\w\t\d\2\s\0\f\r\6\q\i\x\m\n\7\8\2\h\5\t\r\4\q\7\6\f\c\9\q\i\w\0\8\o\t\3\t\d\h\2\g\e\i\x\i\a\n\m\8\e\t\r\3\c\x\u\h\1\p\p\i\4\n\f\r\d\t\9\r\h\q\m\4\x\5\w\2\9\a\c\5\w\u\4\9\d\o\e\8\d\0\c\7\r\o\7\i\q\8\7\t\u\9\n\i\c\m\8\t\g\c\p\5\p\v\v\q\l\o\h\j\r\v\6\i\d\q\h\6\d\9\r\a\l\5\p\0\g\z\7\0\d\j\0\r\1\s\z\b\9\q\a\i\o\5\6\q\8\9\8\d\d\3\d\s\d\a\f\y\1\u\s\1\c\y\0\5\c\b\m\x\1\d\1\i\f\x\4\y\2\8\l\c\r\x\6\z\h\3\r\5\c\l\q\w\y\l\4\0\7\v\w\f\z\o\9\g\6\k\t\8\p\8\x\i\u\9\z\x\4\l\b\b\f\w\q\r\e\d\3\4\j\s\q\d\6\x\a\k\l\w\m\r\t\0\8\l\a\p\m\k\z\7\v\a\2\h\1\8\m\i\v\o\e\9\k\z\s\n\m\8\y\m\h\a\q\r\q\c\m\i\b\t\3\d\j\2\d\s\4\4\n\f\z\4\2\u\s\z\9\f\q\8\e\z\8\s\r\q\2\7\2\c\f\g\k\e\n\k\5\p\k\h\1\7\5\2\o\r\z\a\m\4\u\t\p\r\x\v\r\t\c\i\r\l\l\5\w\m\e\z\q\y\l\o\6\e\r\b\c\s\c\g\2\r\f\t\l\x\v\b\m\v\2\j\4\0\2\i\u\s\e\w\n\p\p\7\7\3\u\w\0\l\t\n\1\d\f\j\l\4\4\q\p\r\9\9\l\d\5\k\f\1\w\h\b\2\8\8\f\w\g\v\9\2\g\v\v\p\0\0\e\i\h\i\2\o\u\d\t\x\z\v\5\a\0\8\4\v\z\c\1\3\x\o\j\k\9\k\y\p\w\e\r\b\3\e\m\6\4\g\4\y\n\7\p\b\6\k\w\k\b\w\o\e\c\z\q\d\r\r\y\9\s\w\m\z\4\x\3\4\7\f\b\2\a\5\4\g\s\n\o\9\5\l\6\6\3\h\e\a\w\9\x\j\l\v\2\l\l\a\8\0\1\u\6\3\v\m\n\p\1\6\o\1\4\e\d\d\x\3\g\j\i\q\k\o\d\z\k\a\s\e\y\s\o\u\9\a\b\c\v\q\h\8\7\2\p\k\z\x\e\n\y\e\1\c\x\j\p\q\l\x\0\u\i\h\m\y\s\y\a\t\h\a\v\c\j\r\i\7\3\6\3\8\c\q\5\k\9\v\h\g\n\b\f\9\k\d\t\r\8\0\x\a\2\m\1\c\n\c\o\w\e\c\h\4\w\3\a\i\b\q\c\m\8\t\c\m\a\c\o\4\p\o\o\4\0\q\g\k\u\2\7\k\s\p\y\l\9\5\p\o\y\5\a\a\r\a\r\4\j\2\0\h\v\d\t\f\4\d\h\s\2\u\y\h\e\r\q\r\l\r\j\8\t\e\x\k\2\q\h\1\t\b\4\t\a\h\a\z\t\0\d\w\5\e\6\4\i\i\z\4\8\e\c\5\5\j\u\v\1\k\5\2\z\1\9\r\p\x\t\1\p\y\q\3\n\7\6\t\b\s\d\v\j\w\f\b\h\g\1\9\2\n\0\d\z\g\2\q\h\f\e\6\5\9\g\y\w\1\t\8\1\5\3\x\i\2\6\s\m\q\0\m\p\1\b\6\3\q\v\v\1\h\f\0\j\a\p\a\m\3\j\k\4\3\5\6\6\7\z\j\7\j\x\i\i\j\1\3\z\q\g\9\7\m\8\4\e\u\5\j\p\j\8\i\t\w\4\1\7\r\m\b\u\w\i\s\i\f\7\5\4\i\q\q\b\q\5\z\0\e\o\y\2\t\j\8\p\p\e\q\s\y\v\m\r\o\f\g\1\o\o\7\7\p\o\b\e\k\l\u\v\5\9\q\e
\l\c\n\w\9\y\c\c\k\r\a\t\g\i\g\u\r\7\6\3\c\k\p\e\y\4\t\6\m\d\u\l\4\o\b\1\r\v\k\s\m\i\c\h\g\6\s\9\8\9\d\c\0\6\t\f\a\y\i\7\z\7\v\5\6\o\9\x\i\m\e\9\u\f\4\8\o\c\q\u\h\i\9\b\0\l\q\y\t\p\w\x\o\7\r\3\z\o\f\d\u\i\0\4\k\m\u\p\2\q\t\9\w\k\s\6\7\z\g\t\q\7\8\c\a\p\d\n\1\i\l\q\c\t\e\0\n\i\e\9\b\h\q\4\4\5\2\u\b\z\a\9\9\v\s\d\t\r\l\3\h\g\n\1\6\4\c\n\m\m\j\t\g\9\f\9\w\3\o\d\2\7\r\7\0\r\b\t\e\z\m\j\z\f\7\k\k\l\q\r\l\1\n\o\y\5\q\0\c\z\u\d\5\i\w\r\o\l\l\2\7\a\z\e\5\o\a\i\r\v\6\q\3\i\v\b\0\g\6\0\2\r\u\q\c\2\0\l\y\g\j\e\6\a\a\1\t\k\j\z\5\x\p\k\x\i\p\p\r\1\e\e\r\8\5\s\h\c\0\h\6\r\p\b\7\k\d\t\m\g\3\q\l\a\0\7\2\s\x\b\1\7\i\y\0\2\b\d\x\6\h\h\8\d\9\m\v\p\2\6\m\3\e\x\f\v\z\2\r\3\9\2\j\s\a\i\f\5\2\x\v\c\s\0\0\6\n\z\d\q\m\j\e\d\g\5\2\e\k\v\w\n\g\i\b\q\e\z\i\5\0\x\w\0\8\s\z\8\d\c\2\i\p\r\h\v\c\p\9\d\n\p\i\u\j\k\7\1\e\a\0\r\9\9\w\p\y\n\k\8\y\1\o\v\0\k\y\i\m\7\o\l\p\k\c\j\l\w\s\d\u\f\h\6\u\8\9\s\s\x\k\j\u\8\u\j\6\5\v\y\5\x\3\6\m\i\r\x\1\y\j\h\7\c\2\6\t\f\2\g\0\r\c\5\b\m\m\h\c\m\d\c\g\k\2\j\b\0\3\p\r\z\u\n\a\r\n\o\b\2\w\k\c\q\6\l\w\h\r\h\e\f\q\2\7\h\o\p\6\w\7\t\p\h\s\y\p\f\4\x\y\4\1\p\1\4\y\1\7\7\7 ]] 00:08:33.132 00:08:33.132 real 0m1.034s 00:08:33.132 user 0m0.663s 00:08:33.132 sys 0m0.236s 00:08:33.133 05:07:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:33.133 05:07:22 -- common/autotest_common.sh@10 -- # set +x 00:08:33.133 ************************************ 00:08:33.133 05:07:22 -- dd/basic_rw.sh@1 -- # cleanup 00:08:33.133 05:07:22 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:33.133 05:07:22 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:33.133 05:07:22 -- dd/common.sh@11 -- # local nvme_ref= 00:08:33.133 05:07:22 -- dd/common.sh@12 -- # local size=0xffff 00:08:33.133 05:07:22 -- dd/common.sh@14 -- # local bs=1048576 00:08:33.133 05:07:22 -- dd/common.sh@15 -- # local count=1 00:08:33.133 05:07:22 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:33.133 05:07:22 -- dd/common.sh@18 -- # gen_conf 00:08:33.133 05:07:22 -- dd/common.sh@31 -- # xtrace_disable 00:08:33.133 05:07:22 -- common/autotest_common.sh@10 -- # set +x 00:08:33.133 [2024-12-08 05:07:22.761416] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
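With dd_rw_offset finished, the script runs its cleanup: clear_nvme zeroes the start of the bdev and the dump files are removed. From the locals visible in the trace (bs=1048576, count=1), the helper reduces to roughly the sketch below; what it does with its size argument beyond storing it is not visible here, so treat the body as approximate:
clear_nvme() {
  local bdev=$1 nvme_ref=$2 size=$3
  local bs=1048576 count=1
  # overwrite the first bs*count bytes of the bdev with zeroes
  "$DD" --if=/dev/zero --bs="$bs" --ob="$bdev" --count="$count" --json <(gen_conf)
}
clear_nvme Nvme0n1 '' 0xffff
rm -f "$SPDK/test/dd/dd.dump0" "$SPDK/test/dd/dd.dump1"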
00:08:33.133 [2024-12-08 05:07:22.761513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70119 ] 00:08:33.133 { 00:08:33.133 "subsystems": [ 00:08:33.133 { 00:08:33.133 "subsystem": "bdev", 00:08:33.133 "config": [ 00:08:33.133 { 00:08:33.133 "params": { 00:08:33.133 "trtype": "pcie", 00:08:33.133 "traddr": "0000:00:06.0", 00:08:33.133 "name": "Nvme0" 00:08:33.133 }, 00:08:33.133 "method": "bdev_nvme_attach_controller" 00:08:33.133 }, 00:08:33.133 { 00:08:33.133 "method": "bdev_wait_for_examine" 00:08:33.133 } 00:08:33.133 ] 00:08:33.133 } 00:08:33.133 ] 00:08:33.133 } 00:08:33.133 [2024-12-08 05:07:22.893290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.392 [2024-12-08 05:07:22.932005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.392  [2024-12-08T05:07:23.439Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:33.653 00:08:33.653 05:07:23 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:33.653 00:08:33.653 real 0m15.115s 00:08:33.653 user 0m10.693s 00:08:33.653 sys 0m2.950s 00:08:33.653 05:07:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:33.653 ************************************ 00:08:33.653 END TEST spdk_dd_basic_rw 00:08:33.653 05:07:23 -- common/autotest_common.sh@10 -- # set +x 00:08:33.653 ************************************ 00:08:33.653 05:07:23 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:33.653 05:07:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:33.653 05:07:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.653 05:07:23 -- common/autotest_common.sh@10 -- # set +x 00:08:33.653 ************************************ 00:08:33.653 START TEST spdk_dd_posix 00:08:33.653 ************************************ 00:08:33.653 05:07:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:33.653 * Looking for test storage... 
00:08:33.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:33.653 05:07:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:33.653 05:07:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:33.653 05:07:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:33.653 05:07:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:33.653 05:07:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:33.653 05:07:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:33.653 05:07:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:33.653 05:07:23 -- scripts/common.sh@335 -- # IFS=.-: 00:08:33.653 05:07:23 -- scripts/common.sh@335 -- # read -ra ver1 00:08:33.653 05:07:23 -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.653 05:07:23 -- scripts/common.sh@336 -- # read -ra ver2 00:08:33.653 05:07:23 -- scripts/common.sh@337 -- # local 'op=<' 00:08:33.653 05:07:23 -- scripts/common.sh@339 -- # ver1_l=2 00:08:33.653 05:07:23 -- scripts/common.sh@340 -- # ver2_l=1 00:08:33.653 05:07:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:33.653 05:07:23 -- scripts/common.sh@343 -- # case "$op" in 00:08:33.653 05:07:23 -- scripts/common.sh@344 -- # : 1 00:08:33.653 05:07:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:33.653 05:07:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:33.653 05:07:23 -- scripts/common.sh@364 -- # decimal 1 00:08:33.653 05:07:23 -- scripts/common.sh@352 -- # local d=1 00:08:33.653 05:07:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.653 05:07:23 -- scripts/common.sh@354 -- # echo 1 00:08:33.653 05:07:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:33.653 05:07:23 -- scripts/common.sh@365 -- # decimal 2 00:08:33.653 05:07:23 -- scripts/common.sh@352 -- # local d=2 00:08:33.653 05:07:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.653 05:07:23 -- scripts/common.sh@354 -- # echo 2 00:08:33.653 05:07:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:33.653 05:07:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:33.653 05:07:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:33.653 05:07:23 -- scripts/common.sh@367 -- # return 0 00:08:33.653 05:07:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.653 05:07:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:33.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.653 --rc genhtml_branch_coverage=1 00:08:33.653 --rc genhtml_function_coverage=1 00:08:33.653 --rc genhtml_legend=1 00:08:33.653 --rc geninfo_all_blocks=1 00:08:33.653 --rc geninfo_unexecuted_blocks=1 00:08:33.653 00:08:33.653 ' 00:08:33.653 05:07:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:33.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.653 --rc genhtml_branch_coverage=1 00:08:33.653 --rc genhtml_function_coverage=1 00:08:33.653 --rc genhtml_legend=1 00:08:33.654 --rc geninfo_all_blocks=1 00:08:33.654 --rc geninfo_unexecuted_blocks=1 00:08:33.654 00:08:33.654 ' 00:08:33.654 05:07:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:33.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.654 --rc genhtml_branch_coverage=1 00:08:33.654 --rc genhtml_function_coverage=1 00:08:33.654 --rc genhtml_legend=1 00:08:33.654 --rc geninfo_all_blocks=1 00:08:33.654 --rc geninfo_unexecuted_blocks=1 00:08:33.654 00:08:33.654 ' 00:08:33.654 05:07:23 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:33.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.654 --rc genhtml_branch_coverage=1 00:08:33.654 --rc genhtml_function_coverage=1 00:08:33.654 --rc genhtml_legend=1 00:08:33.654 --rc geninfo_all_blocks=1 00:08:33.654 --rc geninfo_unexecuted_blocks=1 00:08:33.654 00:08:33.654 ' 00:08:33.654 05:07:23 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.654 05:07:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.654 05:07:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.654 05:07:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.654 05:07:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.654 05:07:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.654 05:07:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.654 05:07:23 -- paths/export.sh@5 -- # export PATH 00:08:33.654 05:07:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.654 05:07:23 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:33.654 05:07:23 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:33.654 05:07:23 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:33.654 05:07:23 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:33.654 05:07:23 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:33.654 05:07:23 -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:33.654 05:07:23 -- dd/posix.sh@130 -- # tests 00:08:33.654 05:07:23 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:33.654 * First test run, liburing in use 00:08:33.654 05:07:23 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:33.654 05:07:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:33.654 05:07:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.654 05:07:23 -- common/autotest_common.sh@10 -- # set +x 00:08:33.913 ************************************ 00:08:33.913 START TEST dd_flag_append 00:08:33.913 ************************************ 00:08:33.913 05:07:23 -- common/autotest_common.sh@1114 -- # append 00:08:33.913 05:07:23 -- dd/posix.sh@16 -- # local dump0 00:08:33.913 05:07:23 -- dd/posix.sh@17 -- # local dump1 00:08:33.913 05:07:23 -- dd/posix.sh@19 -- # gen_bytes 32 00:08:33.913 05:07:23 -- dd/common.sh@98 -- # xtrace_disable 00:08:33.913 05:07:23 -- common/autotest_common.sh@10 -- # set +x 00:08:33.913 05:07:23 -- dd/posix.sh@19 -- # dump0=o9ff4et0spqrpih8idrep5wtc9paa5df 00:08:33.913 05:07:23 -- dd/posix.sh@20 -- # gen_bytes 32 00:08:33.913 05:07:23 -- dd/common.sh@98 -- # xtrace_disable 00:08:33.913 05:07:23 -- common/autotest_common.sh@10 -- # set +x 00:08:33.913 05:07:23 -- dd/posix.sh@20 -- # dump1=cvkfidk3b35gf8p5rf2mi2xzfelcdt6d 00:08:33.913 05:07:23 -- dd/posix.sh@22 -- # printf %s o9ff4et0spqrpih8idrep5wtc9paa5df 00:08:33.913 05:07:23 -- dd/posix.sh@23 -- # printf %s cvkfidk3b35gf8p5rf2mi2xzfelcdt6d 00:08:33.913 05:07:23 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:33.913 [2024-12-08 05:07:23.491911] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
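The dd_flag_append case running here gives each dump file its own 32-byte random string, copies dump0 onto dump1 with --oflag=append, and in the check that follows expects dump1 to contain its original string followed by dump0's. A condensed sketch, with the shell redirections assumed since the trace only shows the printf calls:
dump0=$(gen_bytes 32)
dump1=$(gen_bytes 32)
printf '%s' "$dump0" > "$test_file0"   # test_file0/test_file1 are the dd.dump0/dd.dump1 paths set above
printf '%s' "$dump1" > "$test_file1"
"$DD" --if="$test_file0" --of="$test_file1" --oflag=append
[[ $(< "$test_file1") == "${dump1}${dump0}" ]]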
00:08:33.914 [2024-12-08 05:07:23.492004] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70189 ] 00:08:33.914 [2024-12-08 05:07:23.624069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.914 [2024-12-08 05:07:23.662182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.178  [2024-12-08T05:07:23.964Z] Copying: 32/32 [B] (average 31 kBps) 00:08:34.178 00:08:34.178 05:07:23 -- dd/posix.sh@27 -- # [[ cvkfidk3b35gf8p5rf2mi2xzfelcdt6do9ff4et0spqrpih8idrep5wtc9paa5df == \c\v\k\f\i\d\k\3\b\3\5\g\f\8\p\5\r\f\2\m\i\2\x\z\f\e\l\c\d\t\6\d\o\9\f\f\4\e\t\0\s\p\q\r\p\i\h\8\i\d\r\e\p\5\w\t\c\9\p\a\a\5\d\f ]] 00:08:34.178 00:08:34.178 real 0m0.412s 00:08:34.178 user 0m0.191s 00:08:34.178 sys 0m0.099s 00:08:34.178 05:07:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.178 05:07:23 -- common/autotest_common.sh@10 -- # set +x 00:08:34.178 ************************************ 00:08:34.178 END TEST dd_flag_append 00:08:34.178 ************************************ 00:08:34.178 05:07:23 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:34.178 05:07:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:34.178 05:07:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.178 05:07:23 -- common/autotest_common.sh@10 -- # set +x 00:08:34.178 ************************************ 00:08:34.178 START TEST dd_flag_directory 00:08:34.178 ************************************ 00:08:34.178 05:07:23 -- common/autotest_common.sh@1114 -- # directory 00:08:34.178 05:07:23 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:34.178 05:07:23 -- common/autotest_common.sh@650 -- # local es=0 00:08:34.178 05:07:23 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:34.178 05:07:23 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.178 05:07:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.179 05:07:23 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.179 05:07:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.179 05:07:23 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.179 05:07:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.179 05:07:23 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.179 05:07:23 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.179 05:07:23 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:34.179 [2024-12-08 05:07:23.955046] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:34.179 [2024-12-08 05:07:23.955167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70210 ] 00:08:34.438 [2024-12-08 05:07:24.087201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.438 [2024-12-08 05:07:24.125065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.438 [2024-12-08 05:07:24.171231] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:34.438 [2024-12-08 05:07:24.171288] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:34.438 [2024-12-08 05:07:24.171318] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.697 [2024-12-08 05:07:24.232149] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:34.697 05:07:24 -- common/autotest_common.sh@653 -- # es=236 00:08:34.697 05:07:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:34.697 05:07:24 -- common/autotest_common.sh@662 -- # es=108 00:08:34.697 05:07:24 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:34.697 05:07:24 -- common/autotest_common.sh@670 -- # es=1 00:08:34.697 05:07:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:34.697 05:07:24 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:34.697 05:07:24 -- common/autotest_common.sh@650 -- # local es=0 00:08:34.697 05:07:24 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:34.697 05:07:24 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.697 05:07:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.697 05:07:24 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.697 05:07:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.697 05:07:24 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.697 05:07:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.697 05:07:24 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.697 05:07:24 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.697 05:07:24 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:34.697 [2024-12-08 05:07:24.348328] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
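Both dd_flag_directory runs traced here are negative tests: --iflag=directory and --oflag=directory are applied to a regular file, spdk_dd refuses with "Not a directory", and the NOT wrapper from autotest_common.sh turns the expected failure into a pass. In sketch form:
# both invocations must exit non-zero; NOT inverts the status for the test harness
NOT "$DD" --if="$test_file0" --iflag=directory --of="$test_file0"
NOT "$DD" --if="$test_file0" --of="$test_file0" --oflag=directory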
00:08:34.697 [2024-12-08 05:07:24.348426] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70220 ] 00:08:34.957 [2024-12-08 05:07:24.486188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.957 [2024-12-08 05:07:24.524334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.957 [2024-12-08 05:07:24.574680] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:34.957 [2024-12-08 05:07:24.574811] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:34.957 [2024-12-08 05:07:24.574826] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.957 [2024-12-08 05:07:24.636013] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:34.957 05:07:24 -- common/autotest_common.sh@653 -- # es=236 00:08:34.957 05:07:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:34.957 05:07:24 -- common/autotest_common.sh@662 -- # es=108 00:08:34.957 05:07:24 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:34.957 05:07:24 -- common/autotest_common.sh@670 -- # es=1 00:08:34.957 05:07:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:34.957 00:08:34.957 real 0m0.790s 00:08:34.957 user 0m0.384s 00:08:34.957 sys 0m0.198s 00:08:34.957 05:07:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.957 05:07:24 -- common/autotest_common.sh@10 -- # set +x 00:08:34.957 ************************************ 00:08:34.957 END TEST dd_flag_directory 00:08:34.957 ************************************ 00:08:34.957 05:07:24 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:34.957 05:07:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:34.957 05:07:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.957 05:07:24 -- common/autotest_common.sh@10 -- # set +x 00:08:35.216 ************************************ 00:08:35.216 START TEST dd_flag_nofollow 00:08:35.216 ************************************ 00:08:35.216 05:07:24 -- common/autotest_common.sh@1114 -- # nofollow 00:08:35.216 05:07:24 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:35.216 05:07:24 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:35.216 05:07:24 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:35.216 05:07:24 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:35.216 05:07:24 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:35.216 05:07:24 -- common/autotest_common.sh@650 -- # local es=0 00:08:35.216 05:07:24 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:35.217 05:07:24 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.217 05:07:24 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.217 05:07:24 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.217 05:07:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.217 05:07:24 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.217 05:07:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.217 05:07:24 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.217 05:07:24 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.217 05:07:24 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:35.217 [2024-12-08 05:07:24.808462] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:35.217 [2024-12-08 05:07:24.808555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70248 ] 00:08:35.217 [2024-12-08 05:07:24.948261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.217 [2024-12-08 05:07:24.988243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.476 [2024-12-08 05:07:25.041865] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:35.476 [2024-12-08 05:07:25.041934] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:35.476 [2024-12-08 05:07:25.041977] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.476 [2024-12-08 05:07:25.101533] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:35.476 05:07:25 -- common/autotest_common.sh@653 -- # es=216 00:08:35.476 05:07:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:35.476 05:07:25 -- common/autotest_common.sh@662 -- # es=88 00:08:35.476 05:07:25 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:35.476 05:07:25 -- common/autotest_common.sh@670 -- # es=1 00:08:35.476 05:07:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:35.476 05:07:25 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:35.476 05:07:25 -- common/autotest_common.sh@650 -- # local es=0 00:08:35.476 05:07:25 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:35.476 05:07:25 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.476 05:07:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.476 05:07:25 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.476 05:07:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.476 05:07:25 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.476 05:07:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.476 05:07:25 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.476 05:07:25 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.476 05:07:25 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:35.476 [2024-12-08 05:07:25.225412] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:35.476 [2024-12-08 05:07:25.225502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70252 ] 00:08:35.735 [2024-12-08 05:07:25.363723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.735 [2024-12-08 05:07:25.404429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.735 [2024-12-08 05:07:25.454036] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:35.735 [2024-12-08 05:07:25.454125] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:35.735 [2024-12-08 05:07:25.454170] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.735 [2024-12-08 05:07:25.515146] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:35.995 05:07:25 -- common/autotest_common.sh@653 -- # es=216 00:08:35.995 05:07:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:35.995 05:07:25 -- common/autotest_common.sh@662 -- # es=88 00:08:35.995 05:07:25 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:35.995 05:07:25 -- common/autotest_common.sh@670 -- # es=1 00:08:35.995 05:07:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:35.995 05:07:25 -- dd/posix.sh@46 -- # gen_bytes 512 00:08:35.995 05:07:25 -- dd/common.sh@98 -- # xtrace_disable 00:08:35.995 05:07:25 -- common/autotest_common.sh@10 -- # set +x 00:08:35.995 05:07:25 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:35.995 [2024-12-08 05:07:25.643840] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:35.995 [2024-12-08 05:07:25.643941] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70265 ] 00:08:35.995 [2024-12-08 05:07:25.779513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.253 [2024-12-08 05:07:25.815600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.253  [2024-12-08T05:07:26.039Z] Copying: 512/512 [B] (average 500 kBps) 00:08:36.253 00:08:36.253 05:07:26 -- dd/posix.sh@49 -- # [[ tu5amd8kk1s3zaylkpqmhvn8oaxivtkdybgm7z7j1m2lahysprw46katj3b9a2rc38opls3hk90u4m32uht7ncmic6cjkjhreaiy61uzv96bec48qv6zej0lxncj8jzhl3si8w0ytj5jtugmtee2dv13etoh27rsdsfdti3govt8qe2lzwt9jq9y8qiu8gs0h6vzlxx2pvnxwoxc7fgnu1kuwm22r7e1hr7ajvxnn7ifv2ug1lx029xwji3fopa5xrhwkwd0le4afukzqrcyucebqarkcjmou8px2aftr4qrhnwbxu5xsl4j7bgf2f1lelmqjqfuji8mexgtnn5qo2y9ywphm0qv3gboix7dv4skrh7pu2hkhuwq5lfej4kab7psjr2rilzodjj42yvc1n1ogiuzykjz9sitga60wwss2zy50n5gu2e238kzvk2llozpbnml0t4a665y7lvnrcktqe2ga9gth0jtk34f6bzrtj0jkq15rd0omr7fmw1m == \t\u\5\a\m\d\8\k\k\1\s\3\z\a\y\l\k\p\q\m\h\v\n\8\o\a\x\i\v\t\k\d\y\b\g\m\7\z\7\j\1\m\2\l\a\h\y\s\p\r\w\4\6\k\a\t\j\3\b\9\a\2\r\c\3\8\o\p\l\s\3\h\k\9\0\u\4\m\3\2\u\h\t\7\n\c\m\i\c\6\c\j\k\j\h\r\e\a\i\y\6\1\u\z\v\9\6\b\e\c\4\8\q\v\6\z\e\j\0\l\x\n\c\j\8\j\z\h\l\3\s\i\8\w\0\y\t\j\5\j\t\u\g\m\t\e\e\2\d\v\1\3\e\t\o\h\2\7\r\s\d\s\f\d\t\i\3\g\o\v\t\8\q\e\2\l\z\w\t\9\j\q\9\y\8\q\i\u\8\g\s\0\h\6\v\z\l\x\x\2\p\v\n\x\w\o\x\c\7\f\g\n\u\1\k\u\w\m\2\2\r\7\e\1\h\r\7\a\j\v\x\n\n\7\i\f\v\2\u\g\1\l\x\0\2\9\x\w\j\i\3\f\o\p\a\5\x\r\h\w\k\w\d\0\l\e\4\a\f\u\k\z\q\r\c\y\u\c\e\b\q\a\r\k\c\j\m\o\u\8\p\x\2\a\f\t\r\4\q\r\h\n\w\b\x\u\5\x\s\l\4\j\7\b\g\f\2\f\1\l\e\l\m\q\j\q\f\u\j\i\8\m\e\x\g\t\n\n\5\q\o\2\y\9\y\w\p\h\m\0\q\v\3\g\b\o\i\x\7\d\v\4\s\k\r\h\7\p\u\2\h\k\h\u\w\q\5\l\f\e\j\4\k\a\b\7\p\s\j\r\2\r\i\l\z\o\d\j\j\4\2\y\v\c\1\n\1\o\g\i\u\z\y\k\j\z\9\s\i\t\g\a\6\0\w\w\s\s\2\z\y\5\0\n\5\g\u\2\e\2\3\8\k\z\v\k\2\l\l\o\z\p\b\n\m\l\0\t\4\a\6\6\5\y\7\l\v\n\r\c\k\t\q\e\2\g\a\9\g\t\h\0\j\t\k\3\4\f\6\b\z\r\t\j\0\j\k\q\1\5\r\d\0\o\m\r\7\f\m\w\1\m ]] 00:08:36.253 00:08:36.253 real 0m1.257s 00:08:36.253 user 0m0.607s 00:08:36.253 sys 0m0.319s 00:08:36.253 05:07:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.253 ************************************ 00:08:36.253 END TEST dd_flag_nofollow 00:08:36.253 ************************************ 00:08:36.253 05:07:26 -- common/autotest_common.sh@10 -- # set +x 00:08:36.511 05:07:26 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:36.511 05:07:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:36.511 05:07:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.511 05:07:26 -- common/autotest_common.sh@10 -- # set +x 00:08:36.511 ************************************ 00:08:36.511 START TEST dd_flag_noatime 00:08:36.511 ************************************ 00:08:36.511 05:07:26 -- common/autotest_common.sh@1114 -- # noatime 00:08:36.511 05:07:26 -- dd/posix.sh@53 -- # local atime_if 00:08:36.511 05:07:26 -- dd/posix.sh@54 -- # local atime_of 00:08:36.511 05:07:26 -- dd/posix.sh@58 -- # gen_bytes 512 00:08:36.511 05:07:26 -- dd/common.sh@98 -- # xtrace_disable 00:08:36.511 05:07:26 -- common/autotest_common.sh@10 -- # set +x 00:08:36.511 05:07:26 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:36.511 05:07:26 -- dd/posix.sh@60 -- # atime_if=1733634445 
00:08:36.511 05:07:26 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:36.511 05:07:26 -- dd/posix.sh@61 -- # atime_of=1733634446 00:08:36.511 05:07:26 -- dd/posix.sh@66 -- # sleep 1 00:08:37.446 05:07:27 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:37.446 [2024-12-08 05:07:27.138229] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:37.446 [2024-12-08 05:07:27.138339] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70300 ] 00:08:37.704 [2024-12-08 05:07:27.275331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.704 [2024-12-08 05:07:27.311511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.704  [2024-12-08T05:07:27.749Z] Copying: 512/512 [B] (average 500 kBps) 00:08:37.963 00:08:37.963 05:07:27 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:37.963 05:07:27 -- dd/posix.sh@69 -- # (( atime_if == 1733634445 )) 00:08:37.963 05:07:27 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:37.963 05:07:27 -- dd/posix.sh@70 -- # (( atime_of == 1733634446 )) 00:08:37.963 05:07:27 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:37.963 [2024-12-08 05:07:27.555600] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:37.963 [2024-12-08 05:07:27.555721] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70312 ] 00:08:37.963 [2024-12-08 05:07:27.689055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.963 [2024-12-08 05:07:27.725162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.222  [2024-12-08T05:07:28.008Z] Copying: 512/512 [B] (average 500 kBps) 00:08:38.222 00:08:38.222 05:07:27 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:38.222 05:07:27 -- dd/posix.sh@73 -- # (( atime_if < 1733634447 )) 00:08:38.222 00:08:38.222 real 0m1.873s 00:08:38.222 user 0m0.406s 00:08:38.222 sys 0m0.223s 00:08:38.222 05:07:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:38.222 05:07:27 -- common/autotest_common.sh@10 -- # set +x 00:08:38.222 ************************************ 00:08:38.222 END TEST dd_flag_noatime 00:08:38.222 ************************************ 00:08:38.222 05:07:27 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:38.222 05:07:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:38.222 05:07:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:38.222 05:07:27 -- common/autotest_common.sh@10 -- # set +x 00:08:38.222 ************************************ 00:08:38.222 START TEST dd_flags_misc 00:08:38.222 ************************************ 00:08:38.222 05:07:27 -- common/autotest_common.sh@1114 -- # io 00:08:38.222 05:07:27 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:38.222 05:07:27 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:38.222 05:07:27 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:38.222 05:07:27 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:38.222 05:07:27 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:38.222 05:07:27 -- dd/common.sh@98 -- # xtrace_disable 00:08:38.222 05:07:27 -- common/autotest_common.sh@10 -- # set +x 00:08:38.222 05:07:27 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:38.222 05:07:27 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:38.481 [2024-12-08 05:07:28.040383] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:38.481 [2024-12-08 05:07:28.040484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70338 ] 00:08:38.481 [2024-12-08 05:07:28.172998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.481 [2024-12-08 05:07:28.207802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.481  [2024-12-08T05:07:28.525Z] Copying: 512/512 [B] (average 500 kBps) 00:08:38.739 00:08:38.739 05:07:28 -- dd/posix.sh@93 -- # [[ x322dmenjk60f6dms9z7tlcop1qwzmcvh70x3sfaxfo2exg32h0g87641dg8f7bsbnk5hjutitnadkfc3aql1j5cacf7cp7qx315qc1fq2cqb490gdk62ixcb542werg5k96bk742cavjkladm64umd13d3p8d8mly9jl9clldd70jymrwoogbt62ak18ewuljtliaz3mao9tf7u2bb1i1z5h7t8kt08c1lg6aisukg74q4e7qfowoykl3osp8i05kq8un0ulxgpzpuuvrd5nolz4xx0my044j2ge7ronsayg0thz0mwyil2sbadjzqebyd49y7lmo5tlxqvxjq5k84fk9wtcfat06h27nfywqqkvt4xp114uhm7exf6sxq5845tbvngexzuhom77vxp7bvoddp4ernfk70qrhxc5guf1y2i1bmk6cg3x2tyie9drrb3pgofy0kut37dacoqx5jcbbsotis4p4zz0qxy66uyrs3x36rjk4xbejsczl64 == \x\3\2\2\d\m\e\n\j\k\6\0\f\6\d\m\s\9\z\7\t\l\c\o\p\1\q\w\z\m\c\v\h\7\0\x\3\s\f\a\x\f\o\2\e\x\g\3\2\h\0\g\8\7\6\4\1\d\g\8\f\7\b\s\b\n\k\5\h\j\u\t\i\t\n\a\d\k\f\c\3\a\q\l\1\j\5\c\a\c\f\7\c\p\7\q\x\3\1\5\q\c\1\f\q\2\c\q\b\4\9\0\g\d\k\6\2\i\x\c\b\5\4\2\w\e\r\g\5\k\9\6\b\k\7\4\2\c\a\v\j\k\l\a\d\m\6\4\u\m\d\1\3\d\3\p\8\d\8\m\l\y\9\j\l\9\c\l\l\d\d\7\0\j\y\m\r\w\o\o\g\b\t\6\2\a\k\1\8\e\w\u\l\j\t\l\i\a\z\3\m\a\o\9\t\f\7\u\2\b\b\1\i\1\z\5\h\7\t\8\k\t\0\8\c\1\l\g\6\a\i\s\u\k\g\7\4\q\4\e\7\q\f\o\w\o\y\k\l\3\o\s\p\8\i\0\5\k\q\8\u\n\0\u\l\x\g\p\z\p\u\u\v\r\d\5\n\o\l\z\4\x\x\0\m\y\0\4\4\j\2\g\e\7\r\o\n\s\a\y\g\0\t\h\z\0\m\w\y\i\l\2\s\b\a\d\j\z\q\e\b\y\d\4\9\y\7\l\m\o\5\t\l\x\q\v\x\j\q\5\k\8\4\f\k\9\w\t\c\f\a\t\0\6\h\2\7\n\f\y\w\q\q\k\v\t\4\x\p\1\1\4\u\h\m\7\e\x\f\6\s\x\q\5\8\4\5\t\b\v\n\g\e\x\z\u\h\o\m\7\7\v\x\p\7\b\v\o\d\d\p\4\e\r\n\f\k\7\0\q\r\h\x\c\5\g\u\f\1\y\2\i\1\b\m\k\6\c\g\3\x\2\t\y\i\e\9\d\r\r\b\3\p\g\o\f\y\0\k\u\t\3\7\d\a\c\o\q\x\5\j\c\b\b\s\o\t\i\s\4\p\4\z\z\0\q\x\y\6\6\u\y\r\s\3\x\3\6\r\j\k\4\x\b\e\j\s\c\z\l\6\4 ]] 00:08:38.739 05:07:28 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:38.739 05:07:28 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:38.739 [2024-12-08 05:07:28.474053] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:38.739 [2024-12-08 05:07:28.474196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70346 ] 00:08:38.997 [2024-12-08 05:07:28.612812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.997 [2024-12-08 05:07:28.649427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.997  [2024-12-08T05:07:29.041Z] Copying: 512/512 [B] (average 500 kBps) 00:08:39.255 00:08:39.255 05:07:28 -- dd/posix.sh@93 -- # [[ x322dmenjk60f6dms9z7tlcop1qwzmcvh70x3sfaxfo2exg32h0g87641dg8f7bsbnk5hjutitnadkfc3aql1j5cacf7cp7qx315qc1fq2cqb490gdk62ixcb542werg5k96bk742cavjkladm64umd13d3p8d8mly9jl9clldd70jymrwoogbt62ak18ewuljtliaz3mao9tf7u2bb1i1z5h7t8kt08c1lg6aisukg74q4e7qfowoykl3osp8i05kq8un0ulxgpzpuuvrd5nolz4xx0my044j2ge7ronsayg0thz0mwyil2sbadjzqebyd49y7lmo5tlxqvxjq5k84fk9wtcfat06h27nfywqqkvt4xp114uhm7exf6sxq5845tbvngexzuhom77vxp7bvoddp4ernfk70qrhxc5guf1y2i1bmk6cg3x2tyie9drrb3pgofy0kut37dacoqx5jcbbsotis4p4zz0qxy66uyrs3x36rjk4xbejsczl64 == \x\3\2\2\d\m\e\n\j\k\6\0\f\6\d\m\s\9\z\7\t\l\c\o\p\1\q\w\z\m\c\v\h\7\0\x\3\s\f\a\x\f\o\2\e\x\g\3\2\h\0\g\8\7\6\4\1\d\g\8\f\7\b\s\b\n\k\5\h\j\u\t\i\t\n\a\d\k\f\c\3\a\q\l\1\j\5\c\a\c\f\7\c\p\7\q\x\3\1\5\q\c\1\f\q\2\c\q\b\4\9\0\g\d\k\6\2\i\x\c\b\5\4\2\w\e\r\g\5\k\9\6\b\k\7\4\2\c\a\v\j\k\l\a\d\m\6\4\u\m\d\1\3\d\3\p\8\d\8\m\l\y\9\j\l\9\c\l\l\d\d\7\0\j\y\m\r\w\o\o\g\b\t\6\2\a\k\1\8\e\w\u\l\j\t\l\i\a\z\3\m\a\o\9\t\f\7\u\2\b\b\1\i\1\z\5\h\7\t\8\k\t\0\8\c\1\l\g\6\a\i\s\u\k\g\7\4\q\4\e\7\q\f\o\w\o\y\k\l\3\o\s\p\8\i\0\5\k\q\8\u\n\0\u\l\x\g\p\z\p\u\u\v\r\d\5\n\o\l\z\4\x\x\0\m\y\0\4\4\j\2\g\e\7\r\o\n\s\a\y\g\0\t\h\z\0\m\w\y\i\l\2\s\b\a\d\j\z\q\e\b\y\d\4\9\y\7\l\m\o\5\t\l\x\q\v\x\j\q\5\k\8\4\f\k\9\w\t\c\f\a\t\0\6\h\2\7\n\f\y\w\q\q\k\v\t\4\x\p\1\1\4\u\h\m\7\e\x\f\6\s\x\q\5\8\4\5\t\b\v\n\g\e\x\z\u\h\o\m\7\7\v\x\p\7\b\v\o\d\d\p\4\e\r\n\f\k\7\0\q\r\h\x\c\5\g\u\f\1\y\2\i\1\b\m\k\6\c\g\3\x\2\t\y\i\e\9\d\r\r\b\3\p\g\o\f\y\0\k\u\t\3\7\d\a\c\o\q\x\5\j\c\b\b\s\o\t\i\s\4\p\4\z\z\0\q\x\y\6\6\u\y\r\s\3\x\3\6\r\j\k\4\x\b\e\j\s\c\z\l\6\4 ]] 00:08:39.255 05:07:28 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:39.255 05:07:28 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:39.255 [2024-12-08 05:07:28.893224] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:39.255 [2024-12-08 05:07:28.893337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70353 ] 00:08:39.255 [2024-12-08 05:07:29.031816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.515 [2024-12-08 05:07:29.067894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.515  [2024-12-08T05:07:29.301Z] Copying: 512/512 [B] (average 166 kBps) 00:08:39.515 00:08:39.515 05:07:29 -- dd/posix.sh@93 -- # [[ x322dmenjk60f6dms9z7tlcop1qwzmcvh70x3sfaxfo2exg32h0g87641dg8f7bsbnk5hjutitnadkfc3aql1j5cacf7cp7qx315qc1fq2cqb490gdk62ixcb542werg5k96bk742cavjkladm64umd13d3p8d8mly9jl9clldd70jymrwoogbt62ak18ewuljtliaz3mao9tf7u2bb1i1z5h7t8kt08c1lg6aisukg74q4e7qfowoykl3osp8i05kq8un0ulxgpzpuuvrd5nolz4xx0my044j2ge7ronsayg0thz0mwyil2sbadjzqebyd49y7lmo5tlxqvxjq5k84fk9wtcfat06h27nfywqqkvt4xp114uhm7exf6sxq5845tbvngexzuhom77vxp7bvoddp4ernfk70qrhxc5guf1y2i1bmk6cg3x2tyie9drrb3pgofy0kut37dacoqx5jcbbsotis4p4zz0qxy66uyrs3x36rjk4xbejsczl64 == \x\3\2\2\d\m\e\n\j\k\6\0\f\6\d\m\s\9\z\7\t\l\c\o\p\1\q\w\z\m\c\v\h\7\0\x\3\s\f\a\x\f\o\2\e\x\g\3\2\h\0\g\8\7\6\4\1\d\g\8\f\7\b\s\b\n\k\5\h\j\u\t\i\t\n\a\d\k\f\c\3\a\q\l\1\j\5\c\a\c\f\7\c\p\7\q\x\3\1\5\q\c\1\f\q\2\c\q\b\4\9\0\g\d\k\6\2\i\x\c\b\5\4\2\w\e\r\g\5\k\9\6\b\k\7\4\2\c\a\v\j\k\l\a\d\m\6\4\u\m\d\1\3\d\3\p\8\d\8\m\l\y\9\j\l\9\c\l\l\d\d\7\0\j\y\m\r\w\o\o\g\b\t\6\2\a\k\1\8\e\w\u\l\j\t\l\i\a\z\3\m\a\o\9\t\f\7\u\2\b\b\1\i\1\z\5\h\7\t\8\k\t\0\8\c\1\l\g\6\a\i\s\u\k\g\7\4\q\4\e\7\q\f\o\w\o\y\k\l\3\o\s\p\8\i\0\5\k\q\8\u\n\0\u\l\x\g\p\z\p\u\u\v\r\d\5\n\o\l\z\4\x\x\0\m\y\0\4\4\j\2\g\e\7\r\o\n\s\a\y\g\0\t\h\z\0\m\w\y\i\l\2\s\b\a\d\j\z\q\e\b\y\d\4\9\y\7\l\m\o\5\t\l\x\q\v\x\j\q\5\k\8\4\f\k\9\w\t\c\f\a\t\0\6\h\2\7\n\f\y\w\q\q\k\v\t\4\x\p\1\1\4\u\h\m\7\e\x\f\6\s\x\q\5\8\4\5\t\b\v\n\g\e\x\z\u\h\o\m\7\7\v\x\p\7\b\v\o\d\d\p\4\e\r\n\f\k\7\0\q\r\h\x\c\5\g\u\f\1\y\2\i\1\b\m\k\6\c\g\3\x\2\t\y\i\e\9\d\r\r\b\3\p\g\o\f\y\0\k\u\t\3\7\d\a\c\o\q\x\5\j\c\b\b\s\o\t\i\s\4\p\4\z\z\0\q\x\y\6\6\u\y\r\s\3\x\3\6\r\j\k\4\x\b\e\j\s\c\z\l\6\4 ]] 00:08:39.515 05:07:29 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:39.515 05:07:29 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:39.773 [2024-12-08 05:07:29.323998] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:39.773 [2024-12-08 05:07:29.324090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70355 ] 00:08:39.773 [2024-12-08 05:07:29.463668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.773 [2024-12-08 05:07:29.505059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.773  [2024-12-08T05:07:29.819Z] Copying: 512/512 [B] (average 500 kBps) 00:08:40.033 00:08:40.033 05:07:29 -- dd/posix.sh@93 -- # [[ x322dmenjk60f6dms9z7tlcop1qwzmcvh70x3sfaxfo2exg32h0g87641dg8f7bsbnk5hjutitnadkfc3aql1j5cacf7cp7qx315qc1fq2cqb490gdk62ixcb542werg5k96bk742cavjkladm64umd13d3p8d8mly9jl9clldd70jymrwoogbt62ak18ewuljtliaz3mao9tf7u2bb1i1z5h7t8kt08c1lg6aisukg74q4e7qfowoykl3osp8i05kq8un0ulxgpzpuuvrd5nolz4xx0my044j2ge7ronsayg0thz0mwyil2sbadjzqebyd49y7lmo5tlxqvxjq5k84fk9wtcfat06h27nfywqqkvt4xp114uhm7exf6sxq5845tbvngexzuhom77vxp7bvoddp4ernfk70qrhxc5guf1y2i1bmk6cg3x2tyie9drrb3pgofy0kut37dacoqx5jcbbsotis4p4zz0qxy66uyrs3x36rjk4xbejsczl64 == \x\3\2\2\d\m\e\n\j\k\6\0\f\6\d\m\s\9\z\7\t\l\c\o\p\1\q\w\z\m\c\v\h\7\0\x\3\s\f\a\x\f\o\2\e\x\g\3\2\h\0\g\8\7\6\4\1\d\g\8\f\7\b\s\b\n\k\5\h\j\u\t\i\t\n\a\d\k\f\c\3\a\q\l\1\j\5\c\a\c\f\7\c\p\7\q\x\3\1\5\q\c\1\f\q\2\c\q\b\4\9\0\g\d\k\6\2\i\x\c\b\5\4\2\w\e\r\g\5\k\9\6\b\k\7\4\2\c\a\v\j\k\l\a\d\m\6\4\u\m\d\1\3\d\3\p\8\d\8\m\l\y\9\j\l\9\c\l\l\d\d\7\0\j\y\m\r\w\o\o\g\b\t\6\2\a\k\1\8\e\w\u\l\j\t\l\i\a\z\3\m\a\o\9\t\f\7\u\2\b\b\1\i\1\z\5\h\7\t\8\k\t\0\8\c\1\l\g\6\a\i\s\u\k\g\7\4\q\4\e\7\q\f\o\w\o\y\k\l\3\o\s\p\8\i\0\5\k\q\8\u\n\0\u\l\x\g\p\z\p\u\u\v\r\d\5\n\o\l\z\4\x\x\0\m\y\0\4\4\j\2\g\e\7\r\o\n\s\a\y\g\0\t\h\z\0\m\w\y\i\l\2\s\b\a\d\j\z\q\e\b\y\d\4\9\y\7\l\m\o\5\t\l\x\q\v\x\j\q\5\k\8\4\f\k\9\w\t\c\f\a\t\0\6\h\2\7\n\f\y\w\q\q\k\v\t\4\x\p\1\1\4\u\h\m\7\e\x\f\6\s\x\q\5\8\4\5\t\b\v\n\g\e\x\z\u\h\o\m\7\7\v\x\p\7\b\v\o\d\d\p\4\e\r\n\f\k\7\0\q\r\h\x\c\5\g\u\f\1\y\2\i\1\b\m\k\6\c\g\3\x\2\t\y\i\e\9\d\r\r\b\3\p\g\o\f\y\0\k\u\t\3\7\d\a\c\o\q\x\5\j\c\b\b\s\o\t\i\s\4\p\4\z\z\0\q\x\y\6\6\u\y\r\s\3\x\3\6\r\j\k\4\x\b\e\j\s\c\z\l\6\4 ]] 00:08:40.033 05:07:29 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:40.033 05:07:29 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:40.033 05:07:29 -- dd/common.sh@98 -- # xtrace_disable 00:08:40.033 05:07:29 -- common/autotest_common.sh@10 -- # set +x 00:08:40.033 05:07:29 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:40.033 05:07:29 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:40.033 [2024-12-08 05:07:29.757294] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:40.033 [2024-12-08 05:07:29.757388] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70367 ] 00:08:40.292 [2024-12-08 05:07:29.897607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.292 [2024-12-08 05:07:29.941893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.292  [2024-12-08T05:07:30.336Z] Copying: 512/512 [B] (average 500 kBps) 00:08:40.550 00:08:40.550 05:07:30 -- dd/posix.sh@93 -- # [[ 90vv5gokefn1erov28kiskjgf8ztcunapboloa7vkdpy7equflrsvynolorwc8n5iu4179iosmalfv8pg5eocy7aw1lxfoixpankrrfz66s5mml8z2mkyrab6gc7lacmk5w4c6af5b6pwero63exmtr0oojgke750niikv88b6glw5r5kuzpa9x0jhc2anbhfja3f8tvxwpyuw4qbkon9e9owzskud7el2zwnyns8atsm25a4nn3myy4u3vr4zzw14hr10jrdewns85ogdbpa9qyzqktf7yes2z03qetl9431da82rks7ro9wqajt4lb93fiusrefsqiprba4jtia6gis06g1k3li9owb82q20ppnr1p9xqutodwpnlf1k0qc6ytzhv2b39xok6143wt0yx0x4667x2x18ndwd6xqysyj991bt06wh2uygk8tn3qim28jmose265stm5engdi2mm48j01nnitsprvhvgu13m2by8y4ves5vvlj7kjmb7 == \9\0\v\v\5\g\o\k\e\f\n\1\e\r\o\v\2\8\k\i\s\k\j\g\f\8\z\t\c\u\n\a\p\b\o\l\o\a\7\v\k\d\p\y\7\e\q\u\f\l\r\s\v\y\n\o\l\o\r\w\c\8\n\5\i\u\4\1\7\9\i\o\s\m\a\l\f\v\8\p\g\5\e\o\c\y\7\a\w\1\l\x\f\o\i\x\p\a\n\k\r\r\f\z\6\6\s\5\m\m\l\8\z\2\m\k\y\r\a\b\6\g\c\7\l\a\c\m\k\5\w\4\c\6\a\f\5\b\6\p\w\e\r\o\6\3\e\x\m\t\r\0\o\o\j\g\k\e\7\5\0\n\i\i\k\v\8\8\b\6\g\l\w\5\r\5\k\u\z\p\a\9\x\0\j\h\c\2\a\n\b\h\f\j\a\3\f\8\t\v\x\w\p\y\u\w\4\q\b\k\o\n\9\e\9\o\w\z\s\k\u\d\7\e\l\2\z\w\n\y\n\s\8\a\t\s\m\2\5\a\4\n\n\3\m\y\y\4\u\3\v\r\4\z\z\w\1\4\h\r\1\0\j\r\d\e\w\n\s\8\5\o\g\d\b\p\a\9\q\y\z\q\k\t\f\7\y\e\s\2\z\0\3\q\e\t\l\9\4\3\1\d\a\8\2\r\k\s\7\r\o\9\w\q\a\j\t\4\l\b\9\3\f\i\u\s\r\e\f\s\q\i\p\r\b\a\4\j\t\i\a\6\g\i\s\0\6\g\1\k\3\l\i\9\o\w\b\8\2\q\2\0\p\p\n\r\1\p\9\x\q\u\t\o\d\w\p\n\l\f\1\k\0\q\c\6\y\t\z\h\v\2\b\3\9\x\o\k\6\1\4\3\w\t\0\y\x\0\x\4\6\6\7\x\2\x\1\8\n\d\w\d\6\x\q\y\s\y\j\9\9\1\b\t\0\6\w\h\2\u\y\g\k\8\t\n\3\q\i\m\2\8\j\m\o\s\e\2\6\5\s\t\m\5\e\n\g\d\i\2\m\m\4\8\j\0\1\n\n\i\t\s\p\r\v\h\v\g\u\1\3\m\2\b\y\8\y\4\v\e\s\5\v\v\l\j\7\k\j\m\b\7 ]] 00:08:40.550 05:07:30 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:40.550 05:07:30 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:40.550 [2024-12-08 05:07:30.206133] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:40.550 [2024-12-08 05:07:30.206206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70370 ] 00:08:40.808 [2024-12-08 05:07:30.338101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.808 [2024-12-08 05:07:30.376027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.808  [2024-12-08T05:07:30.594Z] Copying: 512/512 [B] (average 500 kBps) 00:08:40.808 00:08:40.808 05:07:30 -- dd/posix.sh@93 -- # [[ 90vv5gokefn1erov28kiskjgf8ztcunapboloa7vkdpy7equflrsvynolorwc8n5iu4179iosmalfv8pg5eocy7aw1lxfoixpankrrfz66s5mml8z2mkyrab6gc7lacmk5w4c6af5b6pwero63exmtr0oojgke750niikv88b6glw5r5kuzpa9x0jhc2anbhfja3f8tvxwpyuw4qbkon9e9owzskud7el2zwnyns8atsm25a4nn3myy4u3vr4zzw14hr10jrdewns85ogdbpa9qyzqktf7yes2z03qetl9431da82rks7ro9wqajt4lb93fiusrefsqiprba4jtia6gis06g1k3li9owb82q20ppnr1p9xqutodwpnlf1k0qc6ytzhv2b39xok6143wt0yx0x4667x2x18ndwd6xqysyj991bt06wh2uygk8tn3qim28jmose265stm5engdi2mm48j01nnitsprvhvgu13m2by8y4ves5vvlj7kjmb7 == \9\0\v\v\5\g\o\k\e\f\n\1\e\r\o\v\2\8\k\i\s\k\j\g\f\8\z\t\c\u\n\a\p\b\o\l\o\a\7\v\k\d\p\y\7\e\q\u\f\l\r\s\v\y\n\o\l\o\r\w\c\8\n\5\i\u\4\1\7\9\i\o\s\m\a\l\f\v\8\p\g\5\e\o\c\y\7\a\w\1\l\x\f\o\i\x\p\a\n\k\r\r\f\z\6\6\s\5\m\m\l\8\z\2\m\k\y\r\a\b\6\g\c\7\l\a\c\m\k\5\w\4\c\6\a\f\5\b\6\p\w\e\r\o\6\3\e\x\m\t\r\0\o\o\j\g\k\e\7\5\0\n\i\i\k\v\8\8\b\6\g\l\w\5\r\5\k\u\z\p\a\9\x\0\j\h\c\2\a\n\b\h\f\j\a\3\f\8\t\v\x\w\p\y\u\w\4\q\b\k\o\n\9\e\9\o\w\z\s\k\u\d\7\e\l\2\z\w\n\y\n\s\8\a\t\s\m\2\5\a\4\n\n\3\m\y\y\4\u\3\v\r\4\z\z\w\1\4\h\r\1\0\j\r\d\e\w\n\s\8\5\o\g\d\b\p\a\9\q\y\z\q\k\t\f\7\y\e\s\2\z\0\3\q\e\t\l\9\4\3\1\d\a\8\2\r\k\s\7\r\o\9\w\q\a\j\t\4\l\b\9\3\f\i\u\s\r\e\f\s\q\i\p\r\b\a\4\j\t\i\a\6\g\i\s\0\6\g\1\k\3\l\i\9\o\w\b\8\2\q\2\0\p\p\n\r\1\p\9\x\q\u\t\o\d\w\p\n\l\f\1\k\0\q\c\6\y\t\z\h\v\2\b\3\9\x\o\k\6\1\4\3\w\t\0\y\x\0\x\4\6\6\7\x\2\x\1\8\n\d\w\d\6\x\q\y\s\y\j\9\9\1\b\t\0\6\w\h\2\u\y\g\k\8\t\n\3\q\i\m\2\8\j\m\o\s\e\2\6\5\s\t\m\5\e\n\g\d\i\2\m\m\4\8\j\0\1\n\n\i\t\s\p\r\v\h\v\g\u\1\3\m\2\b\y\8\y\4\v\e\s\5\v\v\l\j\7\k\j\m\b\7 ]] 00:08:40.808 05:07:30 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:40.808 05:07:30 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:41.067 [2024-12-08 05:07:30.623780] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:41.067 [2024-12-08 05:07:30.623873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70378 ] 00:08:41.067 [2024-12-08 05:07:30.759887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.068 [2024-12-08 05:07:30.799057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.068  [2024-12-08T05:07:31.113Z] Copying: 512/512 [B] (average 500 kBps) 00:08:41.327 00:08:41.328 05:07:30 -- dd/posix.sh@93 -- # [[ 90vv5gokefn1erov28kiskjgf8ztcunapboloa7vkdpy7equflrsvynolorwc8n5iu4179iosmalfv8pg5eocy7aw1lxfoixpankrrfz66s5mml8z2mkyrab6gc7lacmk5w4c6af5b6pwero63exmtr0oojgke750niikv88b6glw5r5kuzpa9x0jhc2anbhfja3f8tvxwpyuw4qbkon9e9owzskud7el2zwnyns8atsm25a4nn3myy4u3vr4zzw14hr10jrdewns85ogdbpa9qyzqktf7yes2z03qetl9431da82rks7ro9wqajt4lb93fiusrefsqiprba4jtia6gis06g1k3li9owb82q20ppnr1p9xqutodwpnlf1k0qc6ytzhv2b39xok6143wt0yx0x4667x2x18ndwd6xqysyj991bt06wh2uygk8tn3qim28jmose265stm5engdi2mm48j01nnitsprvhvgu13m2by8y4ves5vvlj7kjmb7 == \9\0\v\v\5\g\o\k\e\f\n\1\e\r\o\v\2\8\k\i\s\k\j\g\f\8\z\t\c\u\n\a\p\b\o\l\o\a\7\v\k\d\p\y\7\e\q\u\f\l\r\s\v\y\n\o\l\o\r\w\c\8\n\5\i\u\4\1\7\9\i\o\s\m\a\l\f\v\8\p\g\5\e\o\c\y\7\a\w\1\l\x\f\o\i\x\p\a\n\k\r\r\f\z\6\6\s\5\m\m\l\8\z\2\m\k\y\r\a\b\6\g\c\7\l\a\c\m\k\5\w\4\c\6\a\f\5\b\6\p\w\e\r\o\6\3\e\x\m\t\r\0\o\o\j\g\k\e\7\5\0\n\i\i\k\v\8\8\b\6\g\l\w\5\r\5\k\u\z\p\a\9\x\0\j\h\c\2\a\n\b\h\f\j\a\3\f\8\t\v\x\w\p\y\u\w\4\q\b\k\o\n\9\e\9\o\w\z\s\k\u\d\7\e\l\2\z\w\n\y\n\s\8\a\t\s\m\2\5\a\4\n\n\3\m\y\y\4\u\3\v\r\4\z\z\w\1\4\h\r\1\0\j\r\d\e\w\n\s\8\5\o\g\d\b\p\a\9\q\y\z\q\k\t\f\7\y\e\s\2\z\0\3\q\e\t\l\9\4\3\1\d\a\8\2\r\k\s\7\r\o\9\w\q\a\j\t\4\l\b\9\3\f\i\u\s\r\e\f\s\q\i\p\r\b\a\4\j\t\i\a\6\g\i\s\0\6\g\1\k\3\l\i\9\o\w\b\8\2\q\2\0\p\p\n\r\1\p\9\x\q\u\t\o\d\w\p\n\l\f\1\k\0\q\c\6\y\t\z\h\v\2\b\3\9\x\o\k\6\1\4\3\w\t\0\y\x\0\x\4\6\6\7\x\2\x\1\8\n\d\w\d\6\x\q\y\s\y\j\9\9\1\b\t\0\6\w\h\2\u\y\g\k\8\t\n\3\q\i\m\2\8\j\m\o\s\e\2\6\5\s\t\m\5\e\n\g\d\i\2\m\m\4\8\j\0\1\n\n\i\t\s\p\r\v\h\v\g\u\1\3\m\2\b\y\8\y\4\v\e\s\5\v\v\l\j\7\k\j\m\b\7 ]] 00:08:41.328 05:07:30 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:41.328 05:07:30 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:41.328 [2024-12-08 05:07:31.033333] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:41.328 [2024-12-08 05:07:31.033425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70385 ] 00:08:41.589 [2024-12-08 05:07:31.168294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.589 [2024-12-08 05:07:31.206229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.589  [2024-12-08T05:07:31.644Z] Copying: 512/512 [B] (average 500 kBps) 00:08:41.858 00:08:41.858 05:07:31 -- dd/posix.sh@93 -- # [[ 90vv5gokefn1erov28kiskjgf8ztcunapboloa7vkdpy7equflrsvynolorwc8n5iu4179iosmalfv8pg5eocy7aw1lxfoixpankrrfz66s5mml8z2mkyrab6gc7lacmk5w4c6af5b6pwero63exmtr0oojgke750niikv88b6glw5r5kuzpa9x0jhc2anbhfja3f8tvxwpyuw4qbkon9e9owzskud7el2zwnyns8atsm25a4nn3myy4u3vr4zzw14hr10jrdewns85ogdbpa9qyzqktf7yes2z03qetl9431da82rks7ro9wqajt4lb93fiusrefsqiprba4jtia6gis06g1k3li9owb82q20ppnr1p9xqutodwpnlf1k0qc6ytzhv2b39xok6143wt0yx0x4667x2x18ndwd6xqysyj991bt06wh2uygk8tn3qim28jmose265stm5engdi2mm48j01nnitsprvhvgu13m2by8y4ves5vvlj7kjmb7 == \9\0\v\v\5\g\o\k\e\f\n\1\e\r\o\v\2\8\k\i\s\k\j\g\f\8\z\t\c\u\n\a\p\b\o\l\o\a\7\v\k\d\p\y\7\e\q\u\f\l\r\s\v\y\n\o\l\o\r\w\c\8\n\5\i\u\4\1\7\9\i\o\s\m\a\l\f\v\8\p\g\5\e\o\c\y\7\a\w\1\l\x\f\o\i\x\p\a\n\k\r\r\f\z\6\6\s\5\m\m\l\8\z\2\m\k\y\r\a\b\6\g\c\7\l\a\c\m\k\5\w\4\c\6\a\f\5\b\6\p\w\e\r\o\6\3\e\x\m\t\r\0\o\o\j\g\k\e\7\5\0\n\i\i\k\v\8\8\b\6\g\l\w\5\r\5\k\u\z\p\a\9\x\0\j\h\c\2\a\n\b\h\f\j\a\3\f\8\t\v\x\w\p\y\u\w\4\q\b\k\o\n\9\e\9\o\w\z\s\k\u\d\7\e\l\2\z\w\n\y\n\s\8\a\t\s\m\2\5\a\4\n\n\3\m\y\y\4\u\3\v\r\4\z\z\w\1\4\h\r\1\0\j\r\d\e\w\n\s\8\5\o\g\d\b\p\a\9\q\y\z\q\k\t\f\7\y\e\s\2\z\0\3\q\e\t\l\9\4\3\1\d\a\8\2\r\k\s\7\r\o\9\w\q\a\j\t\4\l\b\9\3\f\i\u\s\r\e\f\s\q\i\p\r\b\a\4\j\t\i\a\6\g\i\s\0\6\g\1\k\3\l\i\9\o\w\b\8\2\q\2\0\p\p\n\r\1\p\9\x\q\u\t\o\d\w\p\n\l\f\1\k\0\q\c\6\y\t\z\h\v\2\b\3\9\x\o\k\6\1\4\3\w\t\0\y\x\0\x\4\6\6\7\x\2\x\1\8\n\d\w\d\6\x\q\y\s\y\j\9\9\1\b\t\0\6\w\h\2\u\y\g\k\8\t\n\3\q\i\m\2\8\j\m\o\s\e\2\6\5\s\t\m\5\e\n\g\d\i\2\m\m\4\8\j\0\1\n\n\i\t\s\p\r\v\h\v\g\u\1\3\m\2\b\y\8\y\4\v\e\s\5\v\v\l\j\7\k\j\m\b\7 ]] 00:08:41.858 00:08:41.858 real 0m3.461s 00:08:41.858 user 0m1.692s 00:08:41.858 sys 0m0.769s 00:08:41.858 05:07:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:41.858 ************************************ 00:08:41.858 END TEST dd_flags_misc 00:08:41.858 ************************************ 00:08:41.858 05:07:31 -- common/autotest_common.sh@10 -- # set +x 00:08:41.858 05:07:31 -- dd/posix.sh@131 -- # tests_forced_aio 00:08:41.858 05:07:31 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:41.858 * Second test run, disabling liburing, forcing AIO 00:08:41.858 05:07:31 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:41.858 05:07:31 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:41.858 05:07:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:41.858 05:07:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:41.858 05:07:31 -- common/autotest_common.sh@10 -- # set +x 00:08:41.858 ************************************ 00:08:41.858 START TEST dd_flag_append_forced_aio 00:08:41.858 ************************************ 00:08:41.858 05:07:31 -- common/autotest_common.sh@1114 -- # append 00:08:41.858 05:07:31 -- dd/posix.sh@16 -- # local dump0 00:08:41.858 05:07:31 -- dd/posix.sh@17 -- # local dump1 00:08:41.858 05:07:31 -- dd/posix.sh@19 -- # gen_bytes 32 
00:08:41.858 05:07:31 -- dd/common.sh@98 -- # xtrace_disable 00:08:41.858 05:07:31 -- common/autotest_common.sh@10 -- # set +x 00:08:41.858 05:07:31 -- dd/posix.sh@19 -- # dump0=57cka5k9k6mtuby2czevyk5y06u6q74r 00:08:41.858 05:07:31 -- dd/posix.sh@20 -- # gen_bytes 32 00:08:41.858 05:07:31 -- dd/common.sh@98 -- # xtrace_disable 00:08:41.858 05:07:31 -- common/autotest_common.sh@10 -- # set +x 00:08:41.858 05:07:31 -- dd/posix.sh@20 -- # dump1=l2jr7epjcqx0zy9rbwbhkxoy2vqnwccq 00:08:41.858 05:07:31 -- dd/posix.sh@22 -- # printf %s 57cka5k9k6mtuby2czevyk5y06u6q74r 00:08:41.858 05:07:31 -- dd/posix.sh@23 -- # printf %s l2jr7epjcqx0zy9rbwbhkxoy2vqnwccq 00:08:41.858 05:07:31 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:41.858 [2024-12-08 05:07:31.563716] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:41.858 [2024-12-08 05:07:31.563833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70412 ] 00:08:42.118 [2024-12-08 05:07:31.704083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.118 [2024-12-08 05:07:31.740086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.118  [2024-12-08T05:07:32.163Z] Copying: 32/32 [B] (average 31 kBps) 00:08:42.377 00:08:42.377 ************************************ 00:08:42.377 END TEST dd_flag_append_forced_aio 00:08:42.377 ************************************ 00:08:42.378 05:07:31 -- dd/posix.sh@27 -- # [[ l2jr7epjcqx0zy9rbwbhkxoy2vqnwccq57cka5k9k6mtuby2czevyk5y06u6q74r == \l\2\j\r\7\e\p\j\c\q\x\0\z\y\9\r\b\w\b\h\k\x\o\y\2\v\q\n\w\c\c\q\5\7\c\k\a\5\k\9\k\6\m\t\u\b\y\2\c\z\e\v\y\k\5\y\0\6\u\6\q\7\4\r ]] 00:08:42.378 00:08:42.378 real 0m0.459s 00:08:42.378 user 0m0.237s 00:08:42.378 sys 0m0.102s 00:08:42.378 05:07:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:42.378 05:07:31 -- common/autotest_common.sh@10 -- # set +x 00:08:42.378 05:07:32 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:42.378 05:07:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:42.378 05:07:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.378 05:07:32 -- common/autotest_common.sh@10 -- # set +x 00:08:42.378 ************************************ 00:08:42.378 START TEST dd_flag_directory_forced_aio 00:08:42.378 ************************************ 00:08:42.378 05:07:32 -- common/autotest_common.sh@1114 -- # directory 00:08:42.378 05:07:32 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:42.378 05:07:32 -- common/autotest_common.sh@650 -- # local es=0 00:08:42.378 05:07:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:42.378 05:07:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.378 05:07:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.378 05:07:32 -- common/autotest_common.sh@642 -- # 
type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.378 05:07:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.378 05:07:32 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.378 05:07:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.378 05:07:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.378 05:07:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:42.378 05:07:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:42.378 [2024-12-08 05:07:32.071250] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:42.378 [2024-12-08 05:07:32.071353] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70438 ] 00:08:42.638 [2024-12-08 05:07:32.214022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.638 [2024-12-08 05:07:32.257788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.638 [2024-12-08 05:07:32.312823] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:42.638 [2024-12-08 05:07:32.313185] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:42.638 [2024-12-08 05:07:32.313216] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:42.638 [2024-12-08 05:07:32.384689] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:42.898 05:07:32 -- common/autotest_common.sh@653 -- # es=236 00:08:42.898 05:07:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:42.898 05:07:32 -- common/autotest_common.sh@662 -- # es=108 00:08:42.898 05:07:32 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:42.898 05:07:32 -- common/autotest_common.sh@670 -- # es=1 00:08:42.898 05:07:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:42.898 05:07:32 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:42.898 05:07:32 -- common/autotest_common.sh@650 -- # local es=0 00:08:42.898 05:07:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:42.898 05:07:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.898 05:07:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.898 05:07:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.898 05:07:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.898 05:07:32 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.898 05:07:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.898 05:07:32 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.898 05:07:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:42.898 05:07:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:42.898 [2024-12-08 05:07:32.508294] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:42.898 [2024-12-08 05:07:32.508659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70448 ] 00:08:42.898 [2024-12-08 05:07:32.647749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.157 [2024-12-08 05:07:32.682406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.157 [2024-12-08 05:07:32.727942] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:43.157 [2024-12-08 05:07:32.728026] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:43.157 [2024-12-08 05:07:32.728054] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:43.157 [2024-12-08 05:07:32.790815] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:43.157 05:07:32 -- common/autotest_common.sh@653 -- # es=236 00:08:43.157 05:07:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:43.157 05:07:32 -- common/autotest_common.sh@662 -- # es=108 00:08:43.157 ************************************ 00:08:43.157 END TEST dd_flag_directory_forced_aio 00:08:43.157 ************************************ 00:08:43.157 05:07:32 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:43.157 05:07:32 -- common/autotest_common.sh@670 -- # es=1 00:08:43.157 05:07:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:43.157 00:08:43.157 real 0m0.864s 00:08:43.157 user 0m0.445s 00:08:43.157 sys 0m0.208s 00:08:43.157 05:07:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:43.157 05:07:32 -- common/autotest_common.sh@10 -- # set +x 00:08:43.157 05:07:32 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:43.157 05:07:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:43.157 05:07:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.157 05:07:32 -- common/autotest_common.sh@10 -- # set +x 00:08:43.157 ************************************ 00:08:43.157 START TEST dd_flag_nofollow_forced_aio 00:08:43.157 ************************************ 00:08:43.157 05:07:32 -- common/autotest_common.sh@1114 -- # nofollow 00:08:43.157 05:07:32 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:43.157 05:07:32 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:43.157 05:07:32 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:43.157 05:07:32 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:43.157 05:07:32 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:43.157 05:07:32 -- common/autotest_common.sh@650 -- # local es=0 00:08:43.157 05:07:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:43.157 05:07:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.416 05:07:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.416 05:07:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.416 05:07:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.416 05:07:32 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.416 05:07:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.416 05:07:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.416 05:07:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:43.416 05:07:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:43.416 [2024-12-08 05:07:32.997075] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:43.416 [2024-12-08 05:07:32.997173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70476 ] 00:08:43.416 [2024-12-08 05:07:33.134056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.416 [2024-12-08 05:07:33.168205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.675 [2024-12-08 05:07:33.213543] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:43.675 [2024-12-08 05:07:33.213602] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:43.675 [2024-12-08 05:07:33.213617] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:43.675 [2024-12-08 05:07:33.275768] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:43.675 05:07:33 -- common/autotest_common.sh@653 -- # es=216 00:08:43.675 05:07:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:43.675 05:07:33 -- common/autotest_common.sh@662 -- # es=88 00:08:43.675 05:07:33 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:43.675 05:07:33 -- common/autotest_common.sh@670 -- # es=1 00:08:43.675 05:07:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:43.675 05:07:33 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:43.675 05:07:33 -- common/autotest_common.sh@650 -- # local es=0 00:08:43.675 05:07:33 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:43.675 05:07:33 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.675 05:07:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.675 05:07:33 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.675 05:07:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.675 05:07:33 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.675 05:07:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.675 05:07:33 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.675 05:07:33 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:43.675 05:07:33 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:43.675 [2024-12-08 05:07:33.397446] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:43.675 [2024-12-08 05:07:33.397740] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70480 ] 00:08:43.934 [2024-12-08 05:07:33.536429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.934 [2024-12-08 05:07:33.571212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.934 [2024-12-08 05:07:33.617832] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:43.934 [2024-12-08 05:07:33.617887] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:43.934 [2024-12-08 05:07:33.617917] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:43.934 [2024-12-08 05:07:33.680363] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:44.193 05:07:33 -- common/autotest_common.sh@653 -- # es=216 00:08:44.193 05:07:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:44.193 05:07:33 -- common/autotest_common.sh@662 -- # es=88 00:08:44.193 05:07:33 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:44.193 05:07:33 -- common/autotest_common.sh@670 -- # es=1 00:08:44.193 05:07:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:44.193 05:07:33 -- dd/posix.sh@46 -- # gen_bytes 512 00:08:44.193 05:07:33 -- dd/common.sh@98 -- # xtrace_disable 00:08:44.193 05:07:33 -- common/autotest_common.sh@10 -- # set +x 00:08:44.193 05:07:33 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:44.193 [2024-12-08 05:07:33.802313] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:44.193 [2024-12-08 05:07:33.802544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70488 ] 00:08:44.193 [2024-12-08 05:07:33.939253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.193 [2024-12-08 05:07:33.975943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.452  [2024-12-08T05:07:34.238Z] Copying: 512/512 [B] (average 500 kBps) 00:08:44.452 00:08:44.452 05:07:34 -- dd/posix.sh@49 -- # [[ 1rqp8xzikg4w0ltnrs7i7qy90btp8zbsd7i6gbuooswwy3i5iplvbjaol9wflxe3vb0jgre738ukubmdc98pfbnmkscqhll0kam3bv39g1rtaaykocyzplch2bk94ps2mykmkhfmkn88rhhpi0q6er8682mh4jr9frlza2ymxhya0dat03c72gfxbhaj62mxxaaufazpwm6qrw9e0qbqg2mbvty5084j3kqk9gwwfeqm6kupj2w2qnevczettxhai1i2lfhcwfwmhkplrjqjp9cfaxseoxoxqzuwlel1aowi0q97qfy1ecdlir7ir0fzupu9vlp84lmpk0bvlxmlktqw81xs6mju5u3huui1bat8nzjxmcrxocszw96lhqmxba0i4fijq8jhhcp00pxy1o4cddrmnz0j1gdfqe9vs28jc89zbs4zkf97fb4pm2zmwa11579bytrfzha236qcuduuf264nje8bacupgp62va3mhctk9b4ajz8wscbcgk3 == \1\r\q\p\8\x\z\i\k\g\4\w\0\l\t\n\r\s\7\i\7\q\y\9\0\b\t\p\8\z\b\s\d\7\i\6\g\b\u\o\o\s\w\w\y\3\i\5\i\p\l\v\b\j\a\o\l\9\w\f\l\x\e\3\v\b\0\j\g\r\e\7\3\8\u\k\u\b\m\d\c\9\8\p\f\b\n\m\k\s\c\q\h\l\l\0\k\a\m\3\b\v\3\9\g\1\r\t\a\a\y\k\o\c\y\z\p\l\c\h\2\b\k\9\4\p\s\2\m\y\k\m\k\h\f\m\k\n\8\8\r\h\h\p\i\0\q\6\e\r\8\6\8\2\m\h\4\j\r\9\f\r\l\z\a\2\y\m\x\h\y\a\0\d\a\t\0\3\c\7\2\g\f\x\b\h\a\j\6\2\m\x\x\a\a\u\f\a\z\p\w\m\6\q\r\w\9\e\0\q\b\q\g\2\m\b\v\t\y\5\0\8\4\j\3\k\q\k\9\g\w\w\f\e\q\m\6\k\u\p\j\2\w\2\q\n\e\v\c\z\e\t\t\x\h\a\i\1\i\2\l\f\h\c\w\f\w\m\h\k\p\l\r\j\q\j\p\9\c\f\a\x\s\e\o\x\o\x\q\z\u\w\l\e\l\1\a\o\w\i\0\q\9\7\q\f\y\1\e\c\d\l\i\r\7\i\r\0\f\z\u\p\u\9\v\l\p\8\4\l\m\p\k\0\b\v\l\x\m\l\k\t\q\w\8\1\x\s\6\m\j\u\5\u\3\h\u\u\i\1\b\a\t\8\n\z\j\x\m\c\r\x\o\c\s\z\w\9\6\l\h\q\m\x\b\a\0\i\4\f\i\j\q\8\j\h\h\c\p\0\0\p\x\y\1\o\4\c\d\d\r\m\n\z\0\j\1\g\d\f\q\e\9\v\s\2\8\j\c\8\9\z\b\s\4\z\k\f\9\7\f\b\4\p\m\2\z\m\w\a\1\1\5\7\9\b\y\t\r\f\z\h\a\2\3\6\q\c\u\d\u\u\f\2\6\4\n\j\e\8\b\a\c\u\p\g\p\6\2\v\a\3\m\h\c\t\k\9\b\4\a\j\z\8\w\s\c\b\c\g\k\3 ]] 00:08:44.452 00:08:44.452 real 0m1.245s 00:08:44.452 user 0m0.618s 00:08:44.452 sys 0m0.293s 00:08:44.452 05:07:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:44.452 ************************************ 00:08:44.452 END TEST dd_flag_nofollow_forced_aio 00:08:44.452 ************************************ 00:08:44.452 05:07:34 -- common/autotest_common.sh@10 -- # set +x 00:08:44.452 05:07:34 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:08:44.452 05:07:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:44.452 05:07:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:44.452 05:07:34 -- common/autotest_common.sh@10 -- # set +x 00:08:44.452 ************************************ 00:08:44.452 START TEST dd_flag_noatime_forced_aio 00:08:44.452 ************************************ 00:08:44.452 05:07:34 -- common/autotest_common.sh@1114 -- # noatime 00:08:44.452 05:07:34 -- dd/posix.sh@53 -- # local atime_if 00:08:44.452 05:07:34 -- dd/posix.sh@54 -- # local atime_of 00:08:44.452 05:07:34 -- dd/posix.sh@58 -- # gen_bytes 512 00:08:44.452 05:07:34 -- dd/common.sh@98 -- # xtrace_disable 00:08:44.452 05:07:34 -- common/autotest_common.sh@10 -- # set +x 00:08:44.710 05:07:34 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:44.710 05:07:34 -- dd/posix.sh@60 -- 
# atime_if=1733634454 00:08:44.710 05:07:34 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:44.710 05:07:34 -- dd/posix.sh@61 -- # atime_of=1733634454 00:08:44.710 05:07:34 -- dd/posix.sh@66 -- # sleep 1 00:08:45.646 05:07:35 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:45.646 [2024-12-08 05:07:35.328771] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:45.647 [2024-12-08 05:07:35.328980] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70528 ] 00:08:45.906 [2024-12-08 05:07:35.476650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.906 [2024-12-08 05:07:35.519955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.906  [2024-12-08T05:07:35.952Z] Copying: 512/512 [B] (average 500 kBps) 00:08:46.166 00:08:46.166 05:07:35 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:46.166 05:07:35 -- dd/posix.sh@69 -- # (( atime_if == 1733634454 )) 00:08:46.166 05:07:35 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:46.166 05:07:35 -- dd/posix.sh@70 -- # (( atime_of == 1733634454 )) 00:08:46.166 05:07:35 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:46.166 [2024-12-08 05:07:35.797453] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:46.166 [2024-12-08 05:07:35.797570] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70540 ] 00:08:46.166 [2024-12-08 05:07:35.943046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.428 [2024-12-08 05:07:36.050858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.428  [2024-12-08T05:07:36.472Z] Copying: 512/512 [B] (average 500 kBps) 00:08:46.686 00:08:46.686 05:07:36 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:46.686 05:07:36 -- dd/posix.sh@73 -- # (( atime_if < 1733634456 )) 00:08:46.686 ************************************ 00:08:46.686 END TEST dd_flag_noatime_forced_aio 00:08:46.686 ************************************ 00:08:46.686 00:08:46.686 real 0m2.129s 00:08:46.686 user 0m0.535s 00:08:46.686 sys 0m0.345s 00:08:46.686 05:07:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:46.686 05:07:36 -- common/autotest_common.sh@10 -- # set +x 00:08:46.686 05:07:36 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:46.686 05:07:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:46.686 05:07:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.686 05:07:36 -- common/autotest_common.sh@10 -- # set +x 00:08:46.686 ************************************ 00:08:46.686 START TEST dd_flags_misc_forced_aio 00:08:46.686 ************************************ 00:08:46.686 05:07:36 -- common/autotest_common.sh@1114 -- # io 00:08:46.686 05:07:36 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:46.686 05:07:36 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:46.686 05:07:36 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:46.686 05:07:36 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:46.686 05:07:36 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:46.686 05:07:36 -- dd/common.sh@98 -- # xtrace_disable 00:08:46.686 05:07:36 -- common/autotest_common.sh@10 -- # set +x 00:08:46.686 05:07:36 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:46.686 05:07:36 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:46.944 [2024-12-08 05:07:36.470896] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
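The dd_flag_noatime_forced_aio test above records each dump file's access time with stat --printf=%X, runs spdk_dd with --iflag=noatime and asserts the atime is still 1733634454, then repeats the copy without the flag and expects the atime to have moved forward. A minimal standalone sketch of the same check, using GNU coreutils dd on a throwaway file instead of spdk_dd (the file name is illustrative, and on a relatime/noatime mount the final check may not trigger):

f=./atime_probe
echo data > "$f"
before=$(stat --printf=%X "$f")
sleep 1
dd if="$f" of=/dev/null iflag=noatime status=none     # O_NOATIME read: atime must stay put
(( $(stat --printf=%X "$f") == before )) || echo "atime moved despite noatime"
sleep 1
dd if="$f" of=/dev/null status=none                    # ordinary read: atime may now advance
(( $(stat --printf=%X "$f") > before )) || echo "atime did not advance (relatime mount?)"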
00:08:46.944 [2024-12-08 05:07:36.470978] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70566 ] 00:08:46.944 [2024-12-08 05:07:36.605329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.944 [2024-12-08 05:07:36.643617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.944  [2024-12-08T05:07:36.988Z] Copying: 512/512 [B] (average 500 kBps) 00:08:47.202 00:08:47.202 05:07:36 -- dd/posix.sh@93 -- # [[ wkmou2fo086ayv13ex4j5pyg42sgqbvdbpirnl5ylw9ul1j17zhmmlglmek4j6hb1uerejjhmm5irnfp6bw5u346c3x4cmxs5bod15gxg9xdwc92j4k1bd1wm7veqpufgt0lqytm42sprfqjfjf34liupzzb48qkfimu40you7z2jvp9knxye0ys87jqta4sa4lymsmy5guxhtghfqggl2z96l6jfdhd24z1un1734ecutxiozwa86ruo8pg8wliepzjfrxpnvcni3j2drll4sc5rbvbgomhynym79hdmd3ue8q12ry6sz3lotfr62tayq09tb78ffnh6vnadltnsvbjdsjfqu618jpafa7xzrvf0lhoglkqmq6q1tgk3somnt4vi0w068gm31dkidgpnfrqvkvf0rc1oxvpcblfp5y875a9a0vog9e7isvumw7dabwcifke0ld8tmtxj60d5do470kc5is2ds90ya4dmojymfvhklgot2byx4k11oyb == \w\k\m\o\u\2\f\o\0\8\6\a\y\v\1\3\e\x\4\j\5\p\y\g\4\2\s\g\q\b\v\d\b\p\i\r\n\l\5\y\l\w\9\u\l\1\j\1\7\z\h\m\m\l\g\l\m\e\k\4\j\6\h\b\1\u\e\r\e\j\j\h\m\m\5\i\r\n\f\p\6\b\w\5\u\3\4\6\c\3\x\4\c\m\x\s\5\b\o\d\1\5\g\x\g\9\x\d\w\c\9\2\j\4\k\1\b\d\1\w\m\7\v\e\q\p\u\f\g\t\0\l\q\y\t\m\4\2\s\p\r\f\q\j\f\j\f\3\4\l\i\u\p\z\z\b\4\8\q\k\f\i\m\u\4\0\y\o\u\7\z\2\j\v\p\9\k\n\x\y\e\0\y\s\8\7\j\q\t\a\4\s\a\4\l\y\m\s\m\y\5\g\u\x\h\t\g\h\f\q\g\g\l\2\z\9\6\l\6\j\f\d\h\d\2\4\z\1\u\n\1\7\3\4\e\c\u\t\x\i\o\z\w\a\8\6\r\u\o\8\p\g\8\w\l\i\e\p\z\j\f\r\x\p\n\v\c\n\i\3\j\2\d\r\l\l\4\s\c\5\r\b\v\b\g\o\m\h\y\n\y\m\7\9\h\d\m\d\3\u\e\8\q\1\2\r\y\6\s\z\3\l\o\t\f\r\6\2\t\a\y\q\0\9\t\b\7\8\f\f\n\h\6\v\n\a\d\l\t\n\s\v\b\j\d\s\j\f\q\u\6\1\8\j\p\a\f\a\7\x\z\r\v\f\0\l\h\o\g\l\k\q\m\q\6\q\1\t\g\k\3\s\o\m\n\t\4\v\i\0\w\0\6\8\g\m\3\1\d\k\i\d\g\p\n\f\r\q\v\k\v\f\0\r\c\1\o\x\v\p\c\b\l\f\p\5\y\8\7\5\a\9\a\0\v\o\g\9\e\7\i\s\v\u\m\w\7\d\a\b\w\c\i\f\k\e\0\l\d\8\t\m\t\x\j\6\0\d\5\d\o\4\7\0\k\c\5\i\s\2\d\s\9\0\y\a\4\d\m\o\j\y\m\f\v\h\k\l\g\o\t\2\b\y\x\4\k\1\1\o\y\b ]] 00:08:47.202 05:07:36 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:47.202 05:07:36 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:47.202 [2024-12-08 05:07:36.908728] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
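The long backslash-filled comparison above is not corruption; it is how bash xtrace renders [[ $got == "$want" ]] when the right-hand side is a quoted expansion: every character is escaped to show it is matched literally rather than as a glob pattern. A tiny reproduction of the effect (the exact trace formatting can vary slightly between bash versions):

$ bash -xc 'a=abc; [[ $a == "$a" ]]'
+ a=abc
+ [[ abc == \a\b\c ]]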
00:08:47.202 [2024-12-08 05:07:36.908859] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70574 ] 00:08:47.459 [2024-12-08 05:07:37.046080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.459 [2024-12-08 05:07:37.083357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.459  [2024-12-08T05:07:37.503Z] Copying: 512/512 [B] (average 500 kBps) 00:08:47.717 00:08:47.717 05:07:37 -- dd/posix.sh@93 -- # [[ wkmou2fo086ayv13ex4j5pyg42sgqbvdbpirnl5ylw9ul1j17zhmmlglmek4j6hb1uerejjhmm5irnfp6bw5u346c3x4cmxs5bod15gxg9xdwc92j4k1bd1wm7veqpufgt0lqytm42sprfqjfjf34liupzzb48qkfimu40you7z2jvp9knxye0ys87jqta4sa4lymsmy5guxhtghfqggl2z96l6jfdhd24z1un1734ecutxiozwa86ruo8pg8wliepzjfrxpnvcni3j2drll4sc5rbvbgomhynym79hdmd3ue8q12ry6sz3lotfr62tayq09tb78ffnh6vnadltnsvbjdsjfqu618jpafa7xzrvf0lhoglkqmq6q1tgk3somnt4vi0w068gm31dkidgpnfrqvkvf0rc1oxvpcblfp5y875a9a0vog9e7isvumw7dabwcifke0ld8tmtxj60d5do470kc5is2ds90ya4dmojymfvhklgot2byx4k11oyb == \w\k\m\o\u\2\f\o\0\8\6\a\y\v\1\3\e\x\4\j\5\p\y\g\4\2\s\g\q\b\v\d\b\p\i\r\n\l\5\y\l\w\9\u\l\1\j\1\7\z\h\m\m\l\g\l\m\e\k\4\j\6\h\b\1\u\e\r\e\j\j\h\m\m\5\i\r\n\f\p\6\b\w\5\u\3\4\6\c\3\x\4\c\m\x\s\5\b\o\d\1\5\g\x\g\9\x\d\w\c\9\2\j\4\k\1\b\d\1\w\m\7\v\e\q\p\u\f\g\t\0\l\q\y\t\m\4\2\s\p\r\f\q\j\f\j\f\3\4\l\i\u\p\z\z\b\4\8\q\k\f\i\m\u\4\0\y\o\u\7\z\2\j\v\p\9\k\n\x\y\e\0\y\s\8\7\j\q\t\a\4\s\a\4\l\y\m\s\m\y\5\g\u\x\h\t\g\h\f\q\g\g\l\2\z\9\6\l\6\j\f\d\h\d\2\4\z\1\u\n\1\7\3\4\e\c\u\t\x\i\o\z\w\a\8\6\r\u\o\8\p\g\8\w\l\i\e\p\z\j\f\r\x\p\n\v\c\n\i\3\j\2\d\r\l\l\4\s\c\5\r\b\v\b\g\o\m\h\y\n\y\m\7\9\h\d\m\d\3\u\e\8\q\1\2\r\y\6\s\z\3\l\o\t\f\r\6\2\t\a\y\q\0\9\t\b\7\8\f\f\n\h\6\v\n\a\d\l\t\n\s\v\b\j\d\s\j\f\q\u\6\1\8\j\p\a\f\a\7\x\z\r\v\f\0\l\h\o\g\l\k\q\m\q\6\q\1\t\g\k\3\s\o\m\n\t\4\v\i\0\w\0\6\8\g\m\3\1\d\k\i\d\g\p\n\f\r\q\v\k\v\f\0\r\c\1\o\x\v\p\c\b\l\f\p\5\y\8\7\5\a\9\a\0\v\o\g\9\e\7\i\s\v\u\m\w\7\d\a\b\w\c\i\f\k\e\0\l\d\8\t\m\t\x\j\6\0\d\5\d\o\4\7\0\k\c\5\i\s\2\d\s\9\0\y\a\4\d\m\o\j\y\m\f\v\h\k\l\g\o\t\2\b\y\x\4\k\1\1\o\y\b ]] 00:08:47.717 05:07:37 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:47.717 05:07:37 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:47.717 [2024-12-08 05:07:37.338886] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:47.718 [2024-12-08 05:07:37.338972] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70581 ] 00:08:47.718 [2024-12-08 05:07:37.475708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.976 [2024-12-08 05:07:37.514771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.976  [2024-12-08T05:07:37.762Z] Copying: 512/512 [B] (average 500 kBps) 00:08:47.976 00:08:47.976 05:07:37 -- dd/posix.sh@93 -- # [[ wkmou2fo086ayv13ex4j5pyg42sgqbvdbpirnl5ylw9ul1j17zhmmlglmek4j6hb1uerejjhmm5irnfp6bw5u346c3x4cmxs5bod15gxg9xdwc92j4k1bd1wm7veqpufgt0lqytm42sprfqjfjf34liupzzb48qkfimu40you7z2jvp9knxye0ys87jqta4sa4lymsmy5guxhtghfqggl2z96l6jfdhd24z1un1734ecutxiozwa86ruo8pg8wliepzjfrxpnvcni3j2drll4sc5rbvbgomhynym79hdmd3ue8q12ry6sz3lotfr62tayq09tb78ffnh6vnadltnsvbjdsjfqu618jpafa7xzrvf0lhoglkqmq6q1tgk3somnt4vi0w068gm31dkidgpnfrqvkvf0rc1oxvpcblfp5y875a9a0vog9e7isvumw7dabwcifke0ld8tmtxj60d5do470kc5is2ds90ya4dmojymfvhklgot2byx4k11oyb == \w\k\m\o\u\2\f\o\0\8\6\a\y\v\1\3\e\x\4\j\5\p\y\g\4\2\s\g\q\b\v\d\b\p\i\r\n\l\5\y\l\w\9\u\l\1\j\1\7\z\h\m\m\l\g\l\m\e\k\4\j\6\h\b\1\u\e\r\e\j\j\h\m\m\5\i\r\n\f\p\6\b\w\5\u\3\4\6\c\3\x\4\c\m\x\s\5\b\o\d\1\5\g\x\g\9\x\d\w\c\9\2\j\4\k\1\b\d\1\w\m\7\v\e\q\p\u\f\g\t\0\l\q\y\t\m\4\2\s\p\r\f\q\j\f\j\f\3\4\l\i\u\p\z\z\b\4\8\q\k\f\i\m\u\4\0\y\o\u\7\z\2\j\v\p\9\k\n\x\y\e\0\y\s\8\7\j\q\t\a\4\s\a\4\l\y\m\s\m\y\5\g\u\x\h\t\g\h\f\q\g\g\l\2\z\9\6\l\6\j\f\d\h\d\2\4\z\1\u\n\1\7\3\4\e\c\u\t\x\i\o\z\w\a\8\6\r\u\o\8\p\g\8\w\l\i\e\p\z\j\f\r\x\p\n\v\c\n\i\3\j\2\d\r\l\l\4\s\c\5\r\b\v\b\g\o\m\h\y\n\y\m\7\9\h\d\m\d\3\u\e\8\q\1\2\r\y\6\s\z\3\l\o\t\f\r\6\2\t\a\y\q\0\9\t\b\7\8\f\f\n\h\6\v\n\a\d\l\t\n\s\v\b\j\d\s\j\f\q\u\6\1\8\j\p\a\f\a\7\x\z\r\v\f\0\l\h\o\g\l\k\q\m\q\6\q\1\t\g\k\3\s\o\m\n\t\4\v\i\0\w\0\6\8\g\m\3\1\d\k\i\d\g\p\n\f\r\q\v\k\v\f\0\r\c\1\o\x\v\p\c\b\l\f\p\5\y\8\7\5\a\9\a\0\v\o\g\9\e\7\i\s\v\u\m\w\7\d\a\b\w\c\i\f\k\e\0\l\d\8\t\m\t\x\j\6\0\d\5\d\o\4\7\0\k\c\5\i\s\2\d\s\9\0\y\a\4\d\m\o\j\y\m\f\v\h\k\l\g\o\t\2\b\y\x\4\k\1\1\o\y\b ]] 00:08:47.976 05:07:37 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:47.976 05:07:37 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:48.234 [2024-12-08 05:07:37.773348] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:48.234 [2024-12-08 05:07:37.773453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70589 ] 00:08:48.234 [2024-12-08 05:07:37.908793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.234 [2024-12-08 05:07:37.947392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.234  [2024-12-08T05:07:38.278Z] Copying: 512/512 [B] (average 500 kBps) 00:08:48.492 00:08:48.492 05:07:38 -- dd/posix.sh@93 -- # [[ wkmou2fo086ayv13ex4j5pyg42sgqbvdbpirnl5ylw9ul1j17zhmmlglmek4j6hb1uerejjhmm5irnfp6bw5u346c3x4cmxs5bod15gxg9xdwc92j4k1bd1wm7veqpufgt0lqytm42sprfqjfjf34liupzzb48qkfimu40you7z2jvp9knxye0ys87jqta4sa4lymsmy5guxhtghfqggl2z96l6jfdhd24z1un1734ecutxiozwa86ruo8pg8wliepzjfrxpnvcni3j2drll4sc5rbvbgomhynym79hdmd3ue8q12ry6sz3lotfr62tayq09tb78ffnh6vnadltnsvbjdsjfqu618jpafa7xzrvf0lhoglkqmq6q1tgk3somnt4vi0w068gm31dkidgpnfrqvkvf0rc1oxvpcblfp5y875a9a0vog9e7isvumw7dabwcifke0ld8tmtxj60d5do470kc5is2ds90ya4dmojymfvhklgot2byx4k11oyb == \w\k\m\o\u\2\f\o\0\8\6\a\y\v\1\3\e\x\4\j\5\p\y\g\4\2\s\g\q\b\v\d\b\p\i\r\n\l\5\y\l\w\9\u\l\1\j\1\7\z\h\m\m\l\g\l\m\e\k\4\j\6\h\b\1\u\e\r\e\j\j\h\m\m\5\i\r\n\f\p\6\b\w\5\u\3\4\6\c\3\x\4\c\m\x\s\5\b\o\d\1\5\g\x\g\9\x\d\w\c\9\2\j\4\k\1\b\d\1\w\m\7\v\e\q\p\u\f\g\t\0\l\q\y\t\m\4\2\s\p\r\f\q\j\f\j\f\3\4\l\i\u\p\z\z\b\4\8\q\k\f\i\m\u\4\0\y\o\u\7\z\2\j\v\p\9\k\n\x\y\e\0\y\s\8\7\j\q\t\a\4\s\a\4\l\y\m\s\m\y\5\g\u\x\h\t\g\h\f\q\g\g\l\2\z\9\6\l\6\j\f\d\h\d\2\4\z\1\u\n\1\7\3\4\e\c\u\t\x\i\o\z\w\a\8\6\r\u\o\8\p\g\8\w\l\i\e\p\z\j\f\r\x\p\n\v\c\n\i\3\j\2\d\r\l\l\4\s\c\5\r\b\v\b\g\o\m\h\y\n\y\m\7\9\h\d\m\d\3\u\e\8\q\1\2\r\y\6\s\z\3\l\o\t\f\r\6\2\t\a\y\q\0\9\t\b\7\8\f\f\n\h\6\v\n\a\d\l\t\n\s\v\b\j\d\s\j\f\q\u\6\1\8\j\p\a\f\a\7\x\z\r\v\f\0\l\h\o\g\l\k\q\m\q\6\q\1\t\g\k\3\s\o\m\n\t\4\v\i\0\w\0\6\8\g\m\3\1\d\k\i\d\g\p\n\f\r\q\v\k\v\f\0\r\c\1\o\x\v\p\c\b\l\f\p\5\y\8\7\5\a\9\a\0\v\o\g\9\e\7\i\s\v\u\m\w\7\d\a\b\w\c\i\f\k\e\0\l\d\8\t\m\t\x\j\6\0\d\5\d\o\4\7\0\k\c\5\i\s\2\d\s\9\0\y\a\4\d\m\o\j\y\m\f\v\h\k\l\g\o\t\2\b\y\x\4\k\1\1\o\y\b ]] 00:08:48.492 05:07:38 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:48.492 05:07:38 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:48.492 05:07:38 -- dd/common.sh@98 -- # xtrace_disable 00:08:48.492 05:07:38 -- common/autotest_common.sh@10 -- # set +x 00:08:48.492 05:07:38 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:48.492 05:07:38 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:48.492 [2024-12-08 05:07:38.227425] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:48.492 [2024-12-08 05:07:38.227748] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70591 ] 00:08:48.750 [2024-12-08 05:07:38.364226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.750 [2024-12-08 05:07:38.404705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.750  [2024-12-08T05:07:38.795Z] Copying: 512/512 [B] (average 500 kBps) 00:08:49.009 00:08:49.009 05:07:38 -- dd/posix.sh@93 -- # [[ 4itglvi0itoywo17bv3mu5jgfic1lwkuytkg8tizc2k7v1pmd15uqm0x40318hmg9aq3pcuj3aw7yr3vzlqwlxrdfwasbssjwo0c9vv067n8a7f0ceh5uewsiklud7kxcp0b5qb2ytnmoiwcgrm4sdkbyz7a6b3gi18jxi0jxft5rnh3jq40x5bzqfvrcxeh5uqekppz87w4y5gorkwfokilmbrgzis1ccbhbggvtl9z6yfhdnjflfaatze0r9weymvzaejx90euol4c5r2hg267id3vdl5d4sd9hl0eigbh71obxthxmqwc4jcmyx0bkueqb5tpf80ucml2tco14matm2w31c6ufuw3agq126tykl5cabroj6zcvzd634cwe4qf1ilrveo5ij3blxjpkcwy1vv5l02gpqes8sqibefdmnwrzt83e2oafgcqx9by85inyxf4hs0b0e48p7sywon0y9skynbb8gkwk2d15z69uel1zcwegt6t46pc99eu == \4\i\t\g\l\v\i\0\i\t\o\y\w\o\1\7\b\v\3\m\u\5\j\g\f\i\c\1\l\w\k\u\y\t\k\g\8\t\i\z\c\2\k\7\v\1\p\m\d\1\5\u\q\m\0\x\4\0\3\1\8\h\m\g\9\a\q\3\p\c\u\j\3\a\w\7\y\r\3\v\z\l\q\w\l\x\r\d\f\w\a\s\b\s\s\j\w\o\0\c\9\v\v\0\6\7\n\8\a\7\f\0\c\e\h\5\u\e\w\s\i\k\l\u\d\7\k\x\c\p\0\b\5\q\b\2\y\t\n\m\o\i\w\c\g\r\m\4\s\d\k\b\y\z\7\a\6\b\3\g\i\1\8\j\x\i\0\j\x\f\t\5\r\n\h\3\j\q\4\0\x\5\b\z\q\f\v\r\c\x\e\h\5\u\q\e\k\p\p\z\8\7\w\4\y\5\g\o\r\k\w\f\o\k\i\l\m\b\r\g\z\i\s\1\c\c\b\h\b\g\g\v\t\l\9\z\6\y\f\h\d\n\j\f\l\f\a\a\t\z\e\0\r\9\w\e\y\m\v\z\a\e\j\x\9\0\e\u\o\l\4\c\5\r\2\h\g\2\6\7\i\d\3\v\d\l\5\d\4\s\d\9\h\l\0\e\i\g\b\h\7\1\o\b\x\t\h\x\m\q\w\c\4\j\c\m\y\x\0\b\k\u\e\q\b\5\t\p\f\8\0\u\c\m\l\2\t\c\o\1\4\m\a\t\m\2\w\3\1\c\6\u\f\u\w\3\a\g\q\1\2\6\t\y\k\l\5\c\a\b\r\o\j\6\z\c\v\z\d\6\3\4\c\w\e\4\q\f\1\i\l\r\v\e\o\5\i\j\3\b\l\x\j\p\k\c\w\y\1\v\v\5\l\0\2\g\p\q\e\s\8\s\q\i\b\e\f\d\m\n\w\r\z\t\8\3\e\2\o\a\f\g\c\q\x\9\b\y\8\5\i\n\y\x\f\4\h\s\0\b\0\e\4\8\p\7\s\y\w\o\n\0\y\9\s\k\y\n\b\b\8\g\k\w\k\2\d\1\5\z\6\9\u\e\l\1\z\c\w\e\g\t\6\t\4\6\p\c\9\9\e\u ]] 00:08:49.009 05:07:38 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:49.009 05:07:38 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:49.009 [2024-12-08 05:07:38.678997] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:49.009 [2024-12-08 05:07:38.679312] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70604 ] 00:08:49.268 [2024-12-08 05:07:38.814784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.268 [2024-12-08 05:07:38.855089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.268  [2024-12-08T05:07:39.313Z] Copying: 512/512 [B] (average 500 kBps) 00:08:49.527 00:08:49.528 05:07:39 -- dd/posix.sh@93 -- # [[ 4itglvi0itoywo17bv3mu5jgfic1lwkuytkg8tizc2k7v1pmd15uqm0x40318hmg9aq3pcuj3aw7yr3vzlqwlxrdfwasbssjwo0c9vv067n8a7f0ceh5uewsiklud7kxcp0b5qb2ytnmoiwcgrm4sdkbyz7a6b3gi18jxi0jxft5rnh3jq40x5bzqfvrcxeh5uqekppz87w4y5gorkwfokilmbrgzis1ccbhbggvtl9z6yfhdnjflfaatze0r9weymvzaejx90euol4c5r2hg267id3vdl5d4sd9hl0eigbh71obxthxmqwc4jcmyx0bkueqb5tpf80ucml2tco14matm2w31c6ufuw3agq126tykl5cabroj6zcvzd634cwe4qf1ilrveo5ij3blxjpkcwy1vv5l02gpqes8sqibefdmnwrzt83e2oafgcqx9by85inyxf4hs0b0e48p7sywon0y9skynbb8gkwk2d15z69uel1zcwegt6t46pc99eu == \4\i\t\g\l\v\i\0\i\t\o\y\w\o\1\7\b\v\3\m\u\5\j\g\f\i\c\1\l\w\k\u\y\t\k\g\8\t\i\z\c\2\k\7\v\1\p\m\d\1\5\u\q\m\0\x\4\0\3\1\8\h\m\g\9\a\q\3\p\c\u\j\3\a\w\7\y\r\3\v\z\l\q\w\l\x\r\d\f\w\a\s\b\s\s\j\w\o\0\c\9\v\v\0\6\7\n\8\a\7\f\0\c\e\h\5\u\e\w\s\i\k\l\u\d\7\k\x\c\p\0\b\5\q\b\2\y\t\n\m\o\i\w\c\g\r\m\4\s\d\k\b\y\z\7\a\6\b\3\g\i\1\8\j\x\i\0\j\x\f\t\5\r\n\h\3\j\q\4\0\x\5\b\z\q\f\v\r\c\x\e\h\5\u\q\e\k\p\p\z\8\7\w\4\y\5\g\o\r\k\w\f\o\k\i\l\m\b\r\g\z\i\s\1\c\c\b\h\b\g\g\v\t\l\9\z\6\y\f\h\d\n\j\f\l\f\a\a\t\z\e\0\r\9\w\e\y\m\v\z\a\e\j\x\9\0\e\u\o\l\4\c\5\r\2\h\g\2\6\7\i\d\3\v\d\l\5\d\4\s\d\9\h\l\0\e\i\g\b\h\7\1\o\b\x\t\h\x\m\q\w\c\4\j\c\m\y\x\0\b\k\u\e\q\b\5\t\p\f\8\0\u\c\m\l\2\t\c\o\1\4\m\a\t\m\2\w\3\1\c\6\u\f\u\w\3\a\g\q\1\2\6\t\y\k\l\5\c\a\b\r\o\j\6\z\c\v\z\d\6\3\4\c\w\e\4\q\f\1\i\l\r\v\e\o\5\i\j\3\b\l\x\j\p\k\c\w\y\1\v\v\5\l\0\2\g\p\q\e\s\8\s\q\i\b\e\f\d\m\n\w\r\z\t\8\3\e\2\o\a\f\g\c\q\x\9\b\y\8\5\i\n\y\x\f\4\h\s\0\b\0\e\4\8\p\7\s\y\w\o\n\0\y\9\s\k\y\n\b\b\8\g\k\w\k\2\d\1\5\z\6\9\u\e\l\1\z\c\w\e\g\t\6\t\4\6\p\c\9\9\e\u ]] 00:08:49.528 05:07:39 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:49.528 05:07:39 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:49.528 [2024-12-08 05:07:39.108335] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:49.528 [2024-12-08 05:07:39.108428] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70606 ] 00:08:49.528 [2024-12-08 05:07:39.244444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.528 [2024-12-08 05:07:39.282795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.787  [2024-12-08T05:07:39.573Z] Copying: 512/512 [B] (average 166 kBps) 00:08:49.787 00:08:49.787 05:07:39 -- dd/posix.sh@93 -- # [[ 4itglvi0itoywo17bv3mu5jgfic1lwkuytkg8tizc2k7v1pmd15uqm0x40318hmg9aq3pcuj3aw7yr3vzlqwlxrdfwasbssjwo0c9vv067n8a7f0ceh5uewsiklud7kxcp0b5qb2ytnmoiwcgrm4sdkbyz7a6b3gi18jxi0jxft5rnh3jq40x5bzqfvrcxeh5uqekppz87w4y5gorkwfokilmbrgzis1ccbhbggvtl9z6yfhdnjflfaatze0r9weymvzaejx90euol4c5r2hg267id3vdl5d4sd9hl0eigbh71obxthxmqwc4jcmyx0bkueqb5tpf80ucml2tco14matm2w31c6ufuw3agq126tykl5cabroj6zcvzd634cwe4qf1ilrveo5ij3blxjpkcwy1vv5l02gpqes8sqibefdmnwrzt83e2oafgcqx9by85inyxf4hs0b0e48p7sywon0y9skynbb8gkwk2d15z69uel1zcwegt6t46pc99eu == \4\i\t\g\l\v\i\0\i\t\o\y\w\o\1\7\b\v\3\m\u\5\j\g\f\i\c\1\l\w\k\u\y\t\k\g\8\t\i\z\c\2\k\7\v\1\p\m\d\1\5\u\q\m\0\x\4\0\3\1\8\h\m\g\9\a\q\3\p\c\u\j\3\a\w\7\y\r\3\v\z\l\q\w\l\x\r\d\f\w\a\s\b\s\s\j\w\o\0\c\9\v\v\0\6\7\n\8\a\7\f\0\c\e\h\5\u\e\w\s\i\k\l\u\d\7\k\x\c\p\0\b\5\q\b\2\y\t\n\m\o\i\w\c\g\r\m\4\s\d\k\b\y\z\7\a\6\b\3\g\i\1\8\j\x\i\0\j\x\f\t\5\r\n\h\3\j\q\4\0\x\5\b\z\q\f\v\r\c\x\e\h\5\u\q\e\k\p\p\z\8\7\w\4\y\5\g\o\r\k\w\f\o\k\i\l\m\b\r\g\z\i\s\1\c\c\b\h\b\g\g\v\t\l\9\z\6\y\f\h\d\n\j\f\l\f\a\a\t\z\e\0\r\9\w\e\y\m\v\z\a\e\j\x\9\0\e\u\o\l\4\c\5\r\2\h\g\2\6\7\i\d\3\v\d\l\5\d\4\s\d\9\h\l\0\e\i\g\b\h\7\1\o\b\x\t\h\x\m\q\w\c\4\j\c\m\y\x\0\b\k\u\e\q\b\5\t\p\f\8\0\u\c\m\l\2\t\c\o\1\4\m\a\t\m\2\w\3\1\c\6\u\f\u\w\3\a\g\q\1\2\6\t\y\k\l\5\c\a\b\r\o\j\6\z\c\v\z\d\6\3\4\c\w\e\4\q\f\1\i\l\r\v\e\o\5\i\j\3\b\l\x\j\p\k\c\w\y\1\v\v\5\l\0\2\g\p\q\e\s\8\s\q\i\b\e\f\d\m\n\w\r\z\t\8\3\e\2\o\a\f\g\c\q\x\9\b\y\8\5\i\n\y\x\f\4\h\s\0\b\0\e\4\8\p\7\s\y\w\o\n\0\y\9\s\k\y\n\b\b\8\g\k\w\k\2\d\1\5\z\6\9\u\e\l\1\z\c\w\e\g\t\6\t\4\6\p\c\9\9\e\u ]] 00:08:49.787 05:07:39 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:49.787 05:07:39 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:49.787 [2024-12-08 05:07:39.545606] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:49.787 [2024-12-08 05:07:39.545736] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70618 ] 00:08:50.051 [2024-12-08 05:07:39.688871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.051 [2024-12-08 05:07:39.728882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.051  [2024-12-08T05:07:40.096Z] Copying: 512/512 [B] (average 500 kBps) 00:08:50.310 00:08:50.310 ************************************ 00:08:50.310 END TEST dd_flags_misc_forced_aio 00:08:50.310 ************************************ 00:08:50.310 05:07:39 -- dd/posix.sh@93 -- # [[ 4itglvi0itoywo17bv3mu5jgfic1lwkuytkg8tizc2k7v1pmd15uqm0x40318hmg9aq3pcuj3aw7yr3vzlqwlxrdfwasbssjwo0c9vv067n8a7f0ceh5uewsiklud7kxcp0b5qb2ytnmoiwcgrm4sdkbyz7a6b3gi18jxi0jxft5rnh3jq40x5bzqfvrcxeh5uqekppz87w4y5gorkwfokilmbrgzis1ccbhbggvtl9z6yfhdnjflfaatze0r9weymvzaejx90euol4c5r2hg267id3vdl5d4sd9hl0eigbh71obxthxmqwc4jcmyx0bkueqb5tpf80ucml2tco14matm2w31c6ufuw3agq126tykl5cabroj6zcvzd634cwe4qf1ilrveo5ij3blxjpkcwy1vv5l02gpqes8sqibefdmnwrzt83e2oafgcqx9by85inyxf4hs0b0e48p7sywon0y9skynbb8gkwk2d15z69uel1zcwegt6t46pc99eu == \4\i\t\g\l\v\i\0\i\t\o\y\w\o\1\7\b\v\3\m\u\5\j\g\f\i\c\1\l\w\k\u\y\t\k\g\8\t\i\z\c\2\k\7\v\1\p\m\d\1\5\u\q\m\0\x\4\0\3\1\8\h\m\g\9\a\q\3\p\c\u\j\3\a\w\7\y\r\3\v\z\l\q\w\l\x\r\d\f\w\a\s\b\s\s\j\w\o\0\c\9\v\v\0\6\7\n\8\a\7\f\0\c\e\h\5\u\e\w\s\i\k\l\u\d\7\k\x\c\p\0\b\5\q\b\2\y\t\n\m\o\i\w\c\g\r\m\4\s\d\k\b\y\z\7\a\6\b\3\g\i\1\8\j\x\i\0\j\x\f\t\5\r\n\h\3\j\q\4\0\x\5\b\z\q\f\v\r\c\x\e\h\5\u\q\e\k\p\p\z\8\7\w\4\y\5\g\o\r\k\w\f\o\k\i\l\m\b\r\g\z\i\s\1\c\c\b\h\b\g\g\v\t\l\9\z\6\y\f\h\d\n\j\f\l\f\a\a\t\z\e\0\r\9\w\e\y\m\v\z\a\e\j\x\9\0\e\u\o\l\4\c\5\r\2\h\g\2\6\7\i\d\3\v\d\l\5\d\4\s\d\9\h\l\0\e\i\g\b\h\7\1\o\b\x\t\h\x\m\q\w\c\4\j\c\m\y\x\0\b\k\u\e\q\b\5\t\p\f\8\0\u\c\m\l\2\t\c\o\1\4\m\a\t\m\2\w\3\1\c\6\u\f\u\w\3\a\g\q\1\2\6\t\y\k\l\5\c\a\b\r\o\j\6\z\c\v\z\d\6\3\4\c\w\e\4\q\f\1\i\l\r\v\e\o\5\i\j\3\b\l\x\j\p\k\c\w\y\1\v\v\5\l\0\2\g\p\q\e\s\8\s\q\i\b\e\f\d\m\n\w\r\z\t\8\3\e\2\o\a\f\g\c\q\x\9\b\y\8\5\i\n\y\x\f\4\h\s\0\b\0\e\4\8\p\7\s\y\w\o\n\0\y\9\s\k\y\n\b\b\8\g\k\w\k\2\d\1\5\z\6\9\u\e\l\1\z\c\w\e\g\t\6\t\4\6\p\c\9\9\e\u ]] 00:08:50.310 00:08:50.310 real 0m3.534s 00:08:50.310 user 0m1.739s 00:08:50.310 sys 0m0.807s 00:08:50.310 05:07:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.310 05:07:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.310 05:07:39 -- dd/posix.sh@1 -- # cleanup 00:08:50.310 05:07:39 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:50.310 05:07:39 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:50.310 ************************************ 00:08:50.310 END TEST spdk_dd_posix 00:08:50.310 ************************************ 00:08:50.310 00:08:50.310 real 0m16.739s 00:08:50.310 user 0m7.131s 00:08:50.310 sys 0m3.755s 00:08:50.310 05:07:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.310 05:07:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.310 05:07:40 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:50.310 05:07:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:50.310 05:07:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 
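The dd_flags_misc_forced_aio run that just ended crosses the read flags (direct, nonblock) with the write flags (direct, nonblock, sync, dsync), copying the same 512-byte dump file for each of the eight combinations and asserting the output matches the input. Reduced to a sketch, the loop looks like the following (the spdk_dd path is the one used above; cmp stands in for the string comparison the test actually performs):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
  for flag_rw in "${flags_rw[@]}"; do
    "$SPDK_DD" --aio --if=dd.dump0 --iflag="$flag_ro" \
               --of=dd.dump1 --oflag="$flag_rw"
    cmp -s dd.dump0 dd.dump1 || echo "mismatch for iflag=$flag_ro oflag=$flag_rw"
  done
done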
00:08:50.310 05:07:40 -- common/autotest_common.sh@10 -- # set +x 00:08:50.310 ************************************ 00:08:50.310 START TEST spdk_dd_malloc 00:08:50.310 ************************************ 00:08:50.310 05:07:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:50.569 * Looking for test storage... 00:08:50.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:50.569 05:07:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:50.569 05:07:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:50.569 05:07:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:50.569 05:07:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:50.569 05:07:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:50.569 05:07:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:50.569 05:07:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:50.569 05:07:40 -- scripts/common.sh@335 -- # IFS=.-: 00:08:50.569 05:07:40 -- scripts/common.sh@335 -- # read -ra ver1 00:08:50.569 05:07:40 -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.569 05:07:40 -- scripts/common.sh@336 -- # read -ra ver2 00:08:50.569 05:07:40 -- scripts/common.sh@337 -- # local 'op=<' 00:08:50.569 05:07:40 -- scripts/common.sh@339 -- # ver1_l=2 00:08:50.569 05:07:40 -- scripts/common.sh@340 -- # ver2_l=1 00:08:50.569 05:07:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:50.569 05:07:40 -- scripts/common.sh@343 -- # case "$op" in 00:08:50.569 05:07:40 -- scripts/common.sh@344 -- # : 1 00:08:50.569 05:07:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:50.569 05:07:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:50.569 05:07:40 -- scripts/common.sh@364 -- # decimal 1 00:08:50.569 05:07:40 -- scripts/common.sh@352 -- # local d=1 00:08:50.569 05:07:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.569 05:07:40 -- scripts/common.sh@354 -- # echo 1 00:08:50.569 05:07:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:50.569 05:07:40 -- scripts/common.sh@365 -- # decimal 2 00:08:50.569 05:07:40 -- scripts/common.sh@352 -- # local d=2 00:08:50.569 05:07:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.569 05:07:40 -- scripts/common.sh@354 -- # echo 2 00:08:50.569 05:07:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:50.569 05:07:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:50.569 05:07:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:50.569 05:07:40 -- scripts/common.sh@367 -- # return 0 00:08:50.569 05:07:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.569 05:07:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:50.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.569 --rc genhtml_branch_coverage=1 00:08:50.569 --rc genhtml_function_coverage=1 00:08:50.569 --rc genhtml_legend=1 00:08:50.569 --rc geninfo_all_blocks=1 00:08:50.569 --rc geninfo_unexecuted_blocks=1 00:08:50.569 00:08:50.569 ' 00:08:50.569 05:07:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:50.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.569 --rc genhtml_branch_coverage=1 00:08:50.569 --rc genhtml_function_coverage=1 00:08:50.569 --rc genhtml_legend=1 00:08:50.569 --rc geninfo_all_blocks=1 00:08:50.569 --rc geninfo_unexecuted_blocks=1 00:08:50.569 00:08:50.569 ' 00:08:50.569 05:07:40 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:08:50.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.569 --rc genhtml_branch_coverage=1 00:08:50.569 --rc genhtml_function_coverage=1 00:08:50.569 --rc genhtml_legend=1 00:08:50.569 --rc geninfo_all_blocks=1 00:08:50.569 --rc geninfo_unexecuted_blocks=1 00:08:50.570 00:08:50.570 ' 00:08:50.570 05:07:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:50.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.570 --rc genhtml_branch_coverage=1 00:08:50.570 --rc genhtml_function_coverage=1 00:08:50.570 --rc genhtml_legend=1 00:08:50.570 --rc geninfo_all_blocks=1 00:08:50.570 --rc geninfo_unexecuted_blocks=1 00:08:50.570 00:08:50.570 ' 00:08:50.570 05:07:40 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.570 05:07:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.570 05:07:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.570 05:07:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.570 05:07:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.570 05:07:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.570 05:07:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.570 05:07:40 -- paths/export.sh@5 -- # export PATH 00:08:50.570 05:07:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.570 05:07:40 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:50.570 05:07:40 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:50.570 05:07:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:50.570 05:07:40 -- common/autotest_common.sh@10 -- # set +x 00:08:50.570 ************************************ 00:08:50.570 START TEST dd_malloc_copy 00:08:50.570 ************************************ 00:08:50.570 05:07:40 -- common/autotest_common.sh@1114 -- # malloc_copy 00:08:50.570 05:07:40 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:50.570 05:07:40 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:50.570 05:07:40 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:50.570 05:07:40 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:50.570 05:07:40 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:50.570 05:07:40 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:50.570 05:07:40 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:50.570 05:07:40 -- dd/malloc.sh@28 -- # gen_conf 00:08:50.570 05:07:40 -- dd/common.sh@31 -- # xtrace_disable 00:08:50.570 05:07:40 -- common/autotest_common.sh@10 -- # set +x 00:08:50.570 [2024-12-08 05:07:40.305828] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:50.570 [2024-12-08 05:07:40.305947] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70689 ] 00:08:50.570 { 00:08:50.570 "subsystems": [ 00:08:50.570 { 00:08:50.570 "subsystem": "bdev", 00:08:50.570 "config": [ 00:08:50.570 { 00:08:50.570 "params": { 00:08:50.570 "block_size": 512, 00:08:50.570 "num_blocks": 1048576, 00:08:50.570 "name": "malloc0" 00:08:50.570 }, 00:08:50.570 "method": "bdev_malloc_create" 00:08:50.570 }, 00:08:50.570 { 00:08:50.570 "params": { 00:08:50.570 "block_size": 512, 00:08:50.570 "num_blocks": 1048576, 00:08:50.570 "name": "malloc1" 00:08:50.570 }, 00:08:50.570 "method": "bdev_malloc_create" 00:08:50.570 }, 00:08:50.570 { 00:08:50.570 "method": "bdev_wait_for_examine" 00:08:50.570 } 00:08:50.570 ] 00:08:50.570 } 00:08:50.570 ] 00:08:50.570 } 00:08:50.829 [2024-12-08 05:07:40.452723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.829 [2024-12-08 05:07:40.496001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.220  [2024-12-08T05:07:42.960Z] Copying: 184/512 [MB] (184 MBps) [2024-12-08T05:07:43.526Z] Copying: 383/512 [MB] (199 MBps) [2024-12-08T05:07:43.784Z] Copying: 512/512 [MB] (average 194 MBps) 00:08:53.998 00:08:53.998 05:07:43 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:53.998 05:07:43 -- dd/malloc.sh@33 -- # gen_conf 00:08:53.998 05:07:43 -- dd/common.sh@31 -- # xtrace_disable 00:08:53.998 05:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:53.998 [2024-12-08 05:07:43.781796] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
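Each dd_malloc_copy pass above hands spdk_dd a JSON bdev config on a file descriptor (--json /dev/fd/62) that creates two malloc bdevs of 1048576 blocks x 512 bytes, i.e. 512 MiB apiece, which is why the progress lines read 512/512 [MB]. A trimmed sketch of driving the same copy by hand, with process substitution standing in for the test's fd 62:

json='{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_malloc_create",
    "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
  { "method": "bdev_malloc_create",
    "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 } },
  { "method": "bdev_wait_for_examine" } ] } ] }'
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json <(printf '%s' "$json")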
00:08:53.998 [2024-12-08 05:07:43.782572] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70742 ] 00:08:54.256 { 00:08:54.256 "subsystems": [ 00:08:54.256 { 00:08:54.256 "subsystem": "bdev", 00:08:54.256 "config": [ 00:08:54.256 { 00:08:54.256 "params": { 00:08:54.256 "block_size": 512, 00:08:54.256 "num_blocks": 1048576, 00:08:54.256 "name": "malloc0" 00:08:54.256 }, 00:08:54.257 "method": "bdev_malloc_create" 00:08:54.257 }, 00:08:54.257 { 00:08:54.257 "params": { 00:08:54.257 "block_size": 512, 00:08:54.257 "num_blocks": 1048576, 00:08:54.257 "name": "malloc1" 00:08:54.257 }, 00:08:54.257 "method": "bdev_malloc_create" 00:08:54.257 }, 00:08:54.257 { 00:08:54.257 "method": "bdev_wait_for_examine" 00:08:54.257 } 00:08:54.257 ] 00:08:54.257 } 00:08:54.257 ] 00:08:54.257 } 00:08:54.257 [2024-12-08 05:07:43.921301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.257 [2024-12-08 05:07:43.959673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.630  [2024-12-08T05:07:46.349Z] Copying: 186/512 [MB] (186 MBps) [2024-12-08T05:07:47.282Z] Copying: 367/512 [MB] (180 MBps) [2024-12-08T05:07:47.282Z] Copying: 512/512 [MB] (average 186 MBps) 00:08:57.496 00:08:57.496 00:08:57.496 real 0m7.021s 00:08:57.496 user 0m6.297s 00:08:57.496 sys 0m0.551s 00:08:57.496 05:07:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:57.496 05:07:47 -- common/autotest_common.sh@10 -- # set +x 00:08:57.496 ************************************ 00:08:57.496 END TEST dd_malloc_copy 00:08:57.496 ************************************ 00:08:57.754 ************************************ 00:08:57.754 END TEST spdk_dd_malloc 00:08:57.754 ************************************ 00:08:57.754 00:08:57.754 real 0m7.257s 00:08:57.754 user 0m6.431s 00:08:57.754 sys 0m0.652s 00:08:57.754 05:07:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:57.754 05:07:47 -- common/autotest_common.sh@10 -- # set +x 00:08:57.754 05:07:47 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:08:57.754 05:07:47 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:57.754 05:07:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:57.754 05:07:47 -- common/autotest_common.sh@10 -- # set +x 00:08:57.754 ************************************ 00:08:57.754 START TEST spdk_dd_bdev_to_bdev 00:08:57.754 ************************************ 00:08:57.754 05:07:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:08:57.754 * Looking for test storage... 
00:08:57.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:57.755 05:07:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:57.755 05:07:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:57.755 05:07:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:57.755 05:07:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:57.755 05:07:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:57.755 05:07:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:57.755 05:07:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:57.755 05:07:47 -- scripts/common.sh@335 -- # IFS=.-: 00:08:57.755 05:07:47 -- scripts/common.sh@335 -- # read -ra ver1 00:08:57.755 05:07:47 -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.755 05:07:47 -- scripts/common.sh@336 -- # read -ra ver2 00:08:57.755 05:07:47 -- scripts/common.sh@337 -- # local 'op=<' 00:08:57.755 05:07:47 -- scripts/common.sh@339 -- # ver1_l=2 00:08:57.755 05:07:47 -- scripts/common.sh@340 -- # ver2_l=1 00:08:57.755 05:07:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:57.755 05:07:47 -- scripts/common.sh@343 -- # case "$op" in 00:08:57.755 05:07:47 -- scripts/common.sh@344 -- # : 1 00:08:57.755 05:07:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:57.755 05:07:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:57.755 05:07:47 -- scripts/common.sh@364 -- # decimal 1 00:08:57.755 05:07:47 -- scripts/common.sh@352 -- # local d=1 00:08:57.755 05:07:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.755 05:07:47 -- scripts/common.sh@354 -- # echo 1 00:08:57.755 05:07:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:57.755 05:07:47 -- scripts/common.sh@365 -- # decimal 2 00:08:57.755 05:07:47 -- scripts/common.sh@352 -- # local d=2 00:08:57.755 05:07:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.755 05:07:47 -- scripts/common.sh@354 -- # echo 2 00:08:57.755 05:07:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:57.755 05:07:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:57.755 05:07:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:57.755 05:07:47 -- scripts/common.sh@367 -- # return 0 00:08:57.755 05:07:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.755 05:07:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:57.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.755 --rc genhtml_branch_coverage=1 00:08:57.755 --rc genhtml_function_coverage=1 00:08:57.755 --rc genhtml_legend=1 00:08:57.755 --rc geninfo_all_blocks=1 00:08:57.755 --rc geninfo_unexecuted_blocks=1 00:08:57.755 00:08:57.755 ' 00:08:57.755 05:07:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:57.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.755 --rc genhtml_branch_coverage=1 00:08:57.755 --rc genhtml_function_coverage=1 00:08:57.755 --rc genhtml_legend=1 00:08:57.755 --rc geninfo_all_blocks=1 00:08:57.755 --rc geninfo_unexecuted_blocks=1 00:08:57.755 00:08:57.755 ' 00:08:57.755 05:07:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:57.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.755 --rc genhtml_branch_coverage=1 00:08:57.755 --rc genhtml_function_coverage=1 00:08:57.755 --rc genhtml_legend=1 00:08:57.755 --rc geninfo_all_blocks=1 00:08:57.755 --rc geninfo_unexecuted_blocks=1 00:08:57.755 00:08:57.755 ' 00:08:57.755 05:07:47 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:57.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.755 --rc genhtml_branch_coverage=1 00:08:57.755 --rc genhtml_function_coverage=1 00:08:57.755 --rc genhtml_legend=1 00:08:57.755 --rc geninfo_all_blocks=1 00:08:57.755 --rc geninfo_unexecuted_blocks=1 00:08:57.755 00:08:57.755 ' 00:08:57.755 05:07:47 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:57.755 05:07:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.755 05:07:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.755 05:07:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.755 05:07:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.755 05:07:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.755 05:07:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.755 05:07:47 -- paths/export.sh@5 -- # export PATH 00:08:57.755 05:07:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.755 05:07:47 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:57.755 05:07:47 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:57.755 05:07:47 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:57.755 05:07:47 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:57.755 05:07:47 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:57.755 05:07:47 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:57.755 05:07:47 -- dd/bdev_to_bdev.sh@52 -- # 
nvme0_pci=0000:00:06.0 00:08:57.755 05:07:47 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:57.755 05:07:47 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:57.755 05:07:47 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:08:57.755 05:07:47 -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:08:57.755 05:07:47 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:57.755 05:07:47 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:08:58.013 05:07:47 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:58.013 05:07:47 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:58.013 05:07:47 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:58.013 05:07:47 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:58.013 05:07:47 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:58.013 05:07:47 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:58.013 05:07:47 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:58.013 05:07:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:58.013 05:07:47 -- common/autotest_common.sh@10 -- # set +x 00:08:58.013 ************************************ 00:08:58.013 START TEST dd_inflate_file 00:08:58.013 ************************************ 00:08:58.013 05:07:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:58.013 [2024-12-08 05:07:47.599791] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:58.013 [2024-12-08 05:07:47.600116] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70848 ] 00:08:58.013 [2024-12-08 05:07:47.742131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.013 [2024-12-08 05:07:47.782781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.272  [2024-12-08T05:07:48.058Z] Copying: 64/64 [MB] (average 1684 MBps) 00:08:58.272 00:08:58.272 ************************************ 00:08:58.272 END TEST dd_inflate_file 00:08:58.272 ************************************ 00:08:58.272 00:08:58.272 real 0m0.484s 00:08:58.272 user 0m0.223s 00:08:58.272 sys 0m0.140s 00:08:58.272 05:07:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:58.272 05:07:48 -- common/autotest_common.sh@10 -- # set +x 00:08:58.530 05:07:48 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:58.530 05:07:48 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:58.530 05:07:48 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:58.530 05:07:48 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:58.530 05:07:48 -- dd/common.sh@31 -- # xtrace_disable 00:08:58.530 05:07:48 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:58.530 05:07:48 -- common/autotest_common.sh@10 -- # set +x 00:08:58.530 05:07:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:58.530 05:07:48 -- common/autotest_common.sh@10 -- # set +x 00:08:58.530 ************************************ 00:08:58.530 START TEST dd_copy_to_out_bdev 00:08:58.530 ************************************ 00:08:58.530 05:07:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:58.530 [2024-12-08 05:07:48.134557] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
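The test_file0_size=67108891 captured just above is exactly accounted for: the setup first writes the magic line "This Is Our Magic, find it" (26 characters plus a newline, 27 bytes) into dd.dump0, and dd_inflate_file then appends 64 blocks of 1048576 zero bytes with --oflag=append, so 64 * 1048576 + 27 = 67108891. A quick check of the arithmetic:

$ printf 'This Is Our Magic, find it\n' | wc -c
27
$ echo $(( 64 * 1048576 + 27 ))
67108891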
00:08:58.530 [2024-12-08 05:07:48.134880] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefi{ 00:08:58.530 "subsystems": [ 00:08:58.530 { 00:08:58.530 "subsystem": "bdev", 00:08:58.530 "config": [ 00:08:58.530 { 00:08:58.530 "params": { 00:08:58.530 "trtype": "pcie", 00:08:58.530 "traddr": "0000:00:06.0", 00:08:58.530 "name": "Nvme0" 00:08:58.530 }, 00:08:58.530 "method": "bdev_nvme_attach_controller" 00:08:58.530 }, 00:08:58.530 { 00:08:58.530 "params": { 00:08:58.530 "trtype": "pcie", 00:08:58.530 "traddr": "0000:00:07.0", 00:08:58.530 "name": "Nvme1" 00:08:58.530 }, 00:08:58.530 "method": "bdev_nvme_attach_controller" 00:08:58.530 }, 00:08:58.530 { 00:08:58.530 "method": "bdev_wait_for_examine" 00:08:58.530 } 00:08:58.530 ] 00:08:58.530 } 00:08:58.530 ] 00:08:58.530 } 00:08:58.530 x=spdk_pid70887 ] 00:08:58.530 [2024-12-08 05:07:48.277549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.789 [2024-12-08 05:07:48.321055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.776  [2024-12-08T05:07:49.822Z] Copying: 52/64 [MB] (52 MBps) [2024-12-08T05:07:50.080Z] Copying: 64/64 [MB] (average 51 MBps) 00:09:00.294 00:09:00.294 ************************************ 00:09:00.294 END TEST dd_copy_to_out_bdev 00:09:00.294 ************************************ 00:09:00.294 00:09:00.294 real 0m1.888s 00:09:00.294 user 0m1.638s 00:09:00.294 sys 0m0.167s 00:09:00.294 05:07:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:00.294 05:07:49 -- common/autotest_common.sh@10 -- # set +x 00:09:00.294 05:07:50 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:09:00.294 05:07:50 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:09:00.294 05:07:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:00.294 05:07:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.294 05:07:50 -- common/autotest_common.sh@10 -- # set +x 00:09:00.294 ************************************ 00:09:00.294 START TEST dd_offset_magic 00:09:00.294 ************************************ 00:09:00.294 05:07:50 -- common/autotest_common.sh@1114 -- # offset_magic 00:09:00.294 05:07:50 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:09:00.294 05:07:50 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:09:00.294 05:07:50 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:09:00.294 05:07:50 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:00.294 05:07:50 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:09:00.294 05:07:50 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:00.294 05:07:50 -- dd/common.sh@31 -- # xtrace_disable 00:09:00.294 05:07:50 -- common/autotest_common.sh@10 -- # set +x 00:09:00.295 [2024-12-08 05:07:50.077503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:00.295 [2024-12-08 05:07:50.077832] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70929 ] 00:09:00.553 { 00:09:00.553 "subsystems": [ 00:09:00.553 { 00:09:00.553 "subsystem": "bdev", 00:09:00.553 "config": [ 00:09:00.553 { 00:09:00.553 "params": { 00:09:00.553 "trtype": "pcie", 00:09:00.553 "traddr": "0000:00:06.0", 00:09:00.553 "name": "Nvme0" 00:09:00.553 }, 00:09:00.553 "method": "bdev_nvme_attach_controller" 00:09:00.553 }, 00:09:00.553 { 00:09:00.553 "params": { 00:09:00.553 "trtype": "pcie", 00:09:00.553 "traddr": "0000:00:07.0", 00:09:00.553 "name": "Nvme1" 00:09:00.553 }, 00:09:00.553 "method": "bdev_nvme_attach_controller" 00:09:00.553 }, 00:09:00.553 { 00:09:00.553 "method": "bdev_wait_for_examine" 00:09:00.553 } 00:09:00.553 ] 00:09:00.553 } 00:09:00.553 ] 00:09:00.553 } 00:09:00.553 [2024-12-08 05:07:50.217126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.553 [2024-12-08 05:07:50.252798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.812  [2024-12-08T05:07:50.857Z] Copying: 65/65 [MB] (average 1015 MBps) 00:09:01.071 00:09:01.071 05:07:50 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:09:01.071 05:07:50 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:01.071 05:07:50 -- dd/common.sh@31 -- # xtrace_disable 00:09:01.071 05:07:50 -- common/autotest_common.sh@10 -- # set +x 00:09:01.071 [2024-12-08 05:07:50.748813] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:01.071 [2024-12-08 05:07:50.749108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70938 ] 00:09:01.071 { 00:09:01.071 "subsystems": [ 00:09:01.071 { 00:09:01.071 "subsystem": "bdev", 00:09:01.071 "config": [ 00:09:01.071 { 00:09:01.071 "params": { 00:09:01.071 "trtype": "pcie", 00:09:01.071 "traddr": "0000:00:06.0", 00:09:01.071 "name": "Nvme0" 00:09:01.071 }, 00:09:01.071 "method": "bdev_nvme_attach_controller" 00:09:01.071 }, 00:09:01.071 { 00:09:01.071 "params": { 00:09:01.071 "trtype": "pcie", 00:09:01.071 "traddr": "0000:00:07.0", 00:09:01.071 "name": "Nvme1" 00:09:01.071 }, 00:09:01.071 "method": "bdev_nvme_attach_controller" 00:09:01.071 }, 00:09:01.071 { 00:09:01.071 "method": "bdev_wait_for_examine" 00:09:01.071 } 00:09:01.071 ] 00:09:01.071 } 00:09:01.071 ] 00:09:01.071 } 00:09:01.331 [2024-12-08 05:07:50.895338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.331 [2024-12-08 05:07:50.931347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.331  [2024-12-08T05:07:51.376Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:01.590 00:09:01.590 05:07:51 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:01.590 05:07:51 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:01.590 05:07:51 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:01.590 05:07:51 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:09:01.590 05:07:51 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:01.590 05:07:51 -- dd/common.sh@31 -- # xtrace_disable 00:09:01.590 05:07:51 -- common/autotest_common.sh@10 -- # set +x 00:09:01.590 [2024-12-08 05:07:51.352313] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
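Each dd_offset_magic iteration above copies 65 MiB from Nvme0n1 onto Nvme1n1 at --seek=<offset> (16, then 64), reads 1 MiB back from Nvme1n1 at --skip=<offset>, and uses read -rn26 to check that the block still starts with the magic string. The same seek/skip round trip, sketched with plain GNU dd on scratch files rather than NVMe bdevs (names, sizes and offsets below are illustrative only):

magic='This Is Our Magic, find it'
src=src.img dst=dst.img
printf '%s\n' "$magic" > "$src"
truncate -s 8M "$src"                       # pad the source out to 8 MiB
truncate -s 32M "$dst"
for offset in 2 4; do                       # offsets in MiB, standing in for 16 and 64
  dd if="$src" of="$dst" bs=1M seek="$offset" conv=notrunc status=none
  read -rn26 magic_check < <(dd if="$dst" bs=1M skip="$offset" count=1 status=none)
  [[ $magic_check == "$magic" ]] || echo "magic missing at ${offset} MiB"
done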
00:09:01.590 [2024-12-08 05:07:51.352409] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70958 ] 00:09:01.590 { 00:09:01.590 "subsystems": [ 00:09:01.590 { 00:09:01.590 "subsystem": "bdev", 00:09:01.590 "config": [ 00:09:01.590 { 00:09:01.590 "params": { 00:09:01.590 "trtype": "pcie", 00:09:01.590 "traddr": "0000:00:06.0", 00:09:01.590 "name": "Nvme0" 00:09:01.590 }, 00:09:01.590 "method": "bdev_nvme_attach_controller" 00:09:01.590 }, 00:09:01.590 { 00:09:01.590 "params": { 00:09:01.590 "trtype": "pcie", 00:09:01.590 "traddr": "0000:00:07.0", 00:09:01.590 "name": "Nvme1" 00:09:01.590 }, 00:09:01.590 "method": "bdev_nvme_attach_controller" 00:09:01.590 }, 00:09:01.590 { 00:09:01.590 "method": "bdev_wait_for_examine" 00:09:01.590 } 00:09:01.590 ] 00:09:01.590 } 00:09:01.590 ] 00:09:01.590 } 00:09:01.849 [2024-12-08 05:07:51.493331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.849 [2024-12-08 05:07:51.534650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.110  [2024-12-08T05:07:52.155Z] Copying: 65/65 [MB] (average 890 MBps) 00:09:02.369 00:09:02.369 05:07:52 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:09:02.369 05:07:52 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:02.369 05:07:52 -- dd/common.sh@31 -- # xtrace_disable 00:09:02.369 05:07:52 -- common/autotest_common.sh@10 -- # set +x 00:09:02.369 [2024-12-08 05:07:52.118156] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
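The runs above exercise spdk_dd's offset handling between the two NVMe bdevs: a 65 MiB window is copied from Nvme0n1 into Nvme1n1 at a MiB offset (--seek), 1 MiB is dumped back from that offset (--skip), and the dump must begin with the 26-byte marker "This Is Our Magic, find it". A minimal sketch of that loop body, assuming the marker was written to the source bdev by an earlier step not visible in this part of the trace, and assuming bdev.json is a file holding the two-controller config shown above (the test itself passes the same document over /dev/fd/62):

    # copy a 65 MiB window into Nvme1n1 starting at MiB offset 64
    spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json bdev.json
    # read 1 MiB back from that offset and check that the marker survived the copy
    spdk_dd --ib=Nvme1n1 --of=dd.dump1 --count=1 --skip=64 --bs=1048576 --json bdev.json
    read -rn26 magic_check < dd.dump1
    [[ $magic_check == "This Is Our Magic, find it" ]] || echo "magic not found at offset" >&2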
00:09:02.369 [2024-12-08 05:07:52.119088] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70978 ] 00:09:02.369 { 00:09:02.369 "subsystems": [ 00:09:02.369 { 00:09:02.369 "subsystem": "bdev", 00:09:02.369 "config": [ 00:09:02.369 { 00:09:02.369 "params": { 00:09:02.369 "trtype": "pcie", 00:09:02.369 "traddr": "0000:00:06.0", 00:09:02.369 "name": "Nvme0" 00:09:02.369 }, 00:09:02.369 "method": "bdev_nvme_attach_controller" 00:09:02.369 }, 00:09:02.369 { 00:09:02.369 "params": { 00:09:02.369 "trtype": "pcie", 00:09:02.369 "traddr": "0000:00:07.0", 00:09:02.369 "name": "Nvme1" 00:09:02.369 }, 00:09:02.369 "method": "bdev_nvme_attach_controller" 00:09:02.369 }, 00:09:02.369 { 00:09:02.369 "method": "bdev_wait_for_examine" 00:09:02.369 } 00:09:02.369 ] 00:09:02.369 } 00:09:02.369 ] 00:09:02.369 } 00:09:02.627 [2024-12-08 05:07:52.261620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.627 [2024-12-08 05:07:52.298390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.885  [2024-12-08T05:07:52.671Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:09:02.885 00:09:02.885 05:07:52 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:02.885 05:07:52 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:02.885 00:09:02.885 real 0m2.623s 00:09:02.885 user 0m1.893s 00:09:02.885 sys 0m0.531s 00:09:02.885 05:07:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.885 05:07:52 -- common/autotest_common.sh@10 -- # set +x 00:09:02.885 ************************************ 00:09:02.885 END TEST dd_offset_magic 00:09:02.885 ************************************ 00:09:03.142 05:07:52 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:09:03.142 05:07:52 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:09:03.142 05:07:52 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:03.142 05:07:52 -- dd/common.sh@11 -- # local nvme_ref= 00:09:03.142 05:07:52 -- dd/common.sh@12 -- # local size=4194330 00:09:03.142 05:07:52 -- dd/common.sh@14 -- # local bs=1048576 00:09:03.142 05:07:52 -- dd/common.sh@15 -- # local count=5 00:09:03.142 05:07:52 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:09:03.142 05:07:52 -- dd/common.sh@18 -- # gen_conf 00:09:03.142 05:07:52 -- dd/common.sh@31 -- # xtrace_disable 00:09:03.142 05:07:52 -- common/autotest_common.sh@10 -- # set +x 00:09:03.142 [2024-12-08 05:07:52.747164] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:03.142 [2024-12-08 05:07:52.747257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71002 ] 00:09:03.142 { 00:09:03.142 "subsystems": [ 00:09:03.142 { 00:09:03.142 "subsystem": "bdev", 00:09:03.142 "config": [ 00:09:03.142 { 00:09:03.142 "params": { 00:09:03.142 "trtype": "pcie", 00:09:03.142 "traddr": "0000:00:06.0", 00:09:03.142 "name": "Nvme0" 00:09:03.142 }, 00:09:03.142 "method": "bdev_nvme_attach_controller" 00:09:03.142 }, 00:09:03.142 { 00:09:03.142 "params": { 00:09:03.142 "trtype": "pcie", 00:09:03.142 "traddr": "0000:00:07.0", 00:09:03.142 "name": "Nvme1" 00:09:03.142 }, 00:09:03.142 "method": "bdev_nvme_attach_controller" 00:09:03.142 }, 00:09:03.142 { 00:09:03.142 "method": "bdev_wait_for_examine" 00:09:03.142 } 00:09:03.142 ] 00:09:03.142 } 00:09:03.142 ] 00:09:03.142 } 00:09:03.142 [2024-12-08 05:07:52.885046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.142 [2024-12-08 05:07:52.926598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.400  [2024-12-08T05:07:53.444Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:09:03.658 00:09:03.658 05:07:53 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:09:03.658 05:07:53 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:09:03.658 05:07:53 -- dd/common.sh@11 -- # local nvme_ref= 00:09:03.658 05:07:53 -- dd/common.sh@12 -- # local size=4194330 00:09:03.658 05:07:53 -- dd/common.sh@14 -- # local bs=1048576 00:09:03.658 05:07:53 -- dd/common.sh@15 -- # local count=5 00:09:03.658 05:07:53 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:09:03.658 05:07:53 -- dd/common.sh@18 -- # gen_conf 00:09:03.659 05:07:53 -- dd/common.sh@31 -- # xtrace_disable 00:09:03.659 05:07:53 -- common/autotest_common.sh@10 -- # set +x 00:09:03.659 [2024-12-08 05:07:53.328143] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:03.659 [2024-12-08 05:07:53.328401] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71022 ] 00:09:03.659 { 00:09:03.659 "subsystems": [ 00:09:03.659 { 00:09:03.659 "subsystem": "bdev", 00:09:03.659 "config": [ 00:09:03.659 { 00:09:03.659 "params": { 00:09:03.659 "trtype": "pcie", 00:09:03.659 "traddr": "0000:00:06.0", 00:09:03.659 "name": "Nvme0" 00:09:03.659 }, 00:09:03.659 "method": "bdev_nvme_attach_controller" 00:09:03.659 }, 00:09:03.659 { 00:09:03.659 "params": { 00:09:03.659 "trtype": "pcie", 00:09:03.659 "traddr": "0000:00:07.0", 00:09:03.659 "name": "Nvme1" 00:09:03.659 }, 00:09:03.659 "method": "bdev_nvme_attach_controller" 00:09:03.659 }, 00:09:03.659 { 00:09:03.659 "method": "bdev_wait_for_examine" 00:09:03.659 } 00:09:03.659 ] 00:09:03.659 } 00:09:03.659 ] 00:09:03.659 } 00:09:03.917 [2024-12-08 05:07:53.463991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.917 [2024-12-08 05:07:53.502493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.917  [2024-12-08T05:07:53.962Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:09:04.176 00:09:04.176 05:07:53 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:09:04.176 ************************************ 00:09:04.176 END TEST spdk_dd_bdev_to_bdev 00:09:04.176 ************************************ 00:09:04.176 00:09:04.176 real 0m6.517s 00:09:04.176 user 0m4.746s 00:09:04.176 sys 0m1.276s 00:09:04.176 05:07:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:04.176 05:07:53 -- common/autotest_common.sh@10 -- # set +x 00:09:04.176 05:07:53 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:09:04.176 05:07:53 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:04.176 05:07:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:04.176 05:07:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:04.176 05:07:53 -- common/autotest_common.sh@10 -- # set +x 00:09:04.176 ************************************ 00:09:04.176 START TEST spdk_dd_uring 00:09:04.176 ************************************ 00:09:04.176 05:07:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:04.484 * Looking for test storage... 
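Every spdk_dd invocation in this section receives its bdev layout the same way: gen_conf renders the method_bdev_* arrays into a JSON "subsystems" document that is handed to --json over /dev/fd/62. The clear_nvme cleanup above is a convenient example, since it only zero-fills the start of each namespace. A rough standalone equivalent, assuming a hand-written config file in place of the helper and process substitution (the file name is illustrative):

    cat > dd_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller" },
            { "params": { "trtype": "pcie", "traddr": "0000:00:07.0", "name": "Nvme1" },
              "method": "bdev_nvme_attach_controller" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    # same shape as the clear_nvme step above: overwrite the first 5 MiB of Nvme0n1 with zeros
    spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json dd_bdev.json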
00:09:04.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:04.484 05:07:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:04.484 05:07:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:04.484 05:07:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:04.484 05:07:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:04.484 05:07:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:04.484 05:07:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:04.484 05:07:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:04.484 05:07:54 -- scripts/common.sh@335 -- # IFS=.-: 00:09:04.484 05:07:54 -- scripts/common.sh@335 -- # read -ra ver1 00:09:04.484 05:07:54 -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.484 05:07:54 -- scripts/common.sh@336 -- # read -ra ver2 00:09:04.484 05:07:54 -- scripts/common.sh@337 -- # local 'op=<' 00:09:04.484 05:07:54 -- scripts/common.sh@339 -- # ver1_l=2 00:09:04.484 05:07:54 -- scripts/common.sh@340 -- # ver2_l=1 00:09:04.484 05:07:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:04.484 05:07:54 -- scripts/common.sh@343 -- # case "$op" in 00:09:04.484 05:07:54 -- scripts/common.sh@344 -- # : 1 00:09:04.484 05:07:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:04.484 05:07:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:04.484 05:07:54 -- scripts/common.sh@364 -- # decimal 1 00:09:04.484 05:07:54 -- scripts/common.sh@352 -- # local d=1 00:09:04.484 05:07:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.484 05:07:54 -- scripts/common.sh@354 -- # echo 1 00:09:04.484 05:07:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:04.484 05:07:54 -- scripts/common.sh@365 -- # decimal 2 00:09:04.484 05:07:54 -- scripts/common.sh@352 -- # local d=2 00:09:04.484 05:07:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.484 05:07:54 -- scripts/common.sh@354 -- # echo 2 00:09:04.484 05:07:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:04.484 05:07:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:04.484 05:07:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:04.484 05:07:54 -- scripts/common.sh@367 -- # return 0 00:09:04.484 05:07:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.484 05:07:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:04.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.484 --rc genhtml_branch_coverage=1 00:09:04.484 --rc genhtml_function_coverage=1 00:09:04.484 --rc genhtml_legend=1 00:09:04.484 --rc geninfo_all_blocks=1 00:09:04.484 --rc geninfo_unexecuted_blocks=1 00:09:04.484 00:09:04.484 ' 00:09:04.484 05:07:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:04.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.484 --rc genhtml_branch_coverage=1 00:09:04.484 --rc genhtml_function_coverage=1 00:09:04.484 --rc genhtml_legend=1 00:09:04.484 --rc geninfo_all_blocks=1 00:09:04.484 --rc geninfo_unexecuted_blocks=1 00:09:04.484 00:09:04.484 ' 00:09:04.484 05:07:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:04.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.484 --rc genhtml_branch_coverage=1 00:09:04.484 --rc genhtml_function_coverage=1 00:09:04.484 --rc genhtml_legend=1 00:09:04.484 --rc geninfo_all_blocks=1 00:09:04.484 --rc geninfo_unexecuted_blocks=1 00:09:04.484 00:09:04.484 ' 00:09:04.484 05:07:54 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:04.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.484 --rc genhtml_branch_coverage=1 00:09:04.484 --rc genhtml_function_coverage=1 00:09:04.484 --rc genhtml_legend=1 00:09:04.484 --rc geninfo_all_blocks=1 00:09:04.484 --rc geninfo_unexecuted_blocks=1 00:09:04.484 00:09:04.484 ' 00:09:04.484 05:07:54 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:04.484 05:07:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.484 05:07:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.484 05:07:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.484 05:07:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.484 05:07:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.484 05:07:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.484 05:07:54 -- paths/export.sh@5 -- # export PATH 00:09:04.484 05:07:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.484 05:07:54 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:09:04.484 05:07:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:04.484 05:07:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:04.484 05:07:54 -- common/autotest_common.sh@10 -- # set +x 00:09:04.484 ************************************ 00:09:04.484 START TEST dd_uring_copy 00:09:04.484 ************************************ 00:09:04.484 05:07:54 
-- common/autotest_common.sh@1114 -- # uring_zram_copy 00:09:04.484 05:07:54 -- dd/uring.sh@15 -- # local zram_dev_id 00:09:04.484 05:07:54 -- dd/uring.sh@16 -- # local magic 00:09:04.484 05:07:54 -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:09:04.484 05:07:54 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:04.484 05:07:54 -- dd/uring.sh@19 -- # local verify_magic 00:09:04.484 05:07:54 -- dd/uring.sh@21 -- # init_zram 00:09:04.484 05:07:54 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:09:04.484 05:07:54 -- dd/common.sh@164 -- # return 00:09:04.484 05:07:54 -- dd/uring.sh@22 -- # create_zram_dev 00:09:04.484 05:07:54 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:09:04.484 05:07:54 -- dd/uring.sh@22 -- # zram_dev_id=1 00:09:04.484 05:07:54 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:09:04.484 05:07:54 -- dd/common.sh@181 -- # local id=1 00:09:04.484 05:07:54 -- dd/common.sh@182 -- # local size=512M 00:09:04.484 05:07:54 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:09:04.484 05:07:54 -- dd/common.sh@186 -- # echo 512M 00:09:04.484 05:07:54 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:09:04.485 05:07:54 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:09:04.485 05:07:54 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:09:04.485 05:07:54 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:09:04.485 05:07:54 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:09:04.485 05:07:54 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:09:04.485 05:07:54 -- dd/uring.sh@41 -- # gen_bytes 1024 00:09:04.485 05:07:54 -- dd/common.sh@98 -- # xtrace_disable 00:09:04.485 05:07:54 -- common/autotest_common.sh@10 -- # set +x 00:09:04.485 05:07:54 -- dd/uring.sh@41 -- # magic=zkalwe2t3o08duwkemtyspyq3zu5627vxxka6lpk17zi0lhxmmyym05ae9sd6myvtlwbqokhf61fqgsvwcptdvr8l08d22og87u8jc4z4qr8is74jf8g5x1x5q3950u9f7yg4t5sbrchh2rgegbaabixtzfhbsjpdx0a1wnggek09rb42f5rvpr1urrhayeprikspvcb9lc83y0y02aroxo8715wlzzgdbsw6g1p83te8c3pjmnne0bnu91cn6h8in8z1mq92g3483tjjigeufbzhx6v4n2mlnnhlgalnpklfi3d76byzv2omrk1xrlcngt2gxbhe1ov53jlsomc1f2xwysslwunf23o564ehfys4m9ixms6rxch1au9dvqe2un68xxwtujm3y7g9fz42euxtwy9ld54i7f5tndipdc4d82bxwhf666kvlyfjvj9tvlac0c96mb3jiqnrl0dpt8zcxp2msegp4un8dmn4ulhzexbqkx9jsoy6ek37p8qituuaqtgxh53imk63bex2cf2fqu8nchnmd2jorv7x72680u0g6wuafrdi8uvqtwudnrwk3gaalg8fgjvefyyl0sdguxavkev5x0z8b0u5ebaw6x769jt45y8xn8ntgj3r32bmmld4r53lxwjzzwjliga1r7vg61053n1ylmwb8xkiol9n392999hw9ixazn91cuexyt68mjn7lqoy1yvzh735koscuwxibd5zy7xyf68ymt865n289cdunslemhu3ft3o3pl99a6gyum5burudavn4qh6mgc0grff3nhsnicd9d8t3l93n8icas61hry723fpsj381gclsb9bv247owgo4s3gqfqj6hcqzasa76ifd6u8uexg0utj5w47abygov0bowajj8ufx1cu2wi0sxzq9qvubdwfbv3go5lzoz7bnt9jawrn51l4zxt2qn36cisbsenzdjb6s6ig5fxou3r9ishzul9tiauqtzygftzpdfmqrct1xxplmxpwkuy 00:09:04.485 05:07:54 -- dd/uring.sh@42 -- # echo 
zkalwe2t3o08duwkemtyspyq3zu5627vxxka6lpk17zi0lhxmmyym05ae9sd6myvtlwbqokhf61fqgsvwcptdvr8l08d22og87u8jc4z4qr8is74jf8g5x1x5q3950u9f7yg4t5sbrchh2rgegbaabixtzfhbsjpdx0a1wnggek09rb42f5rvpr1urrhayeprikspvcb9lc83y0y02aroxo8715wlzzgdbsw6g1p83te8c3pjmnne0bnu91cn6h8in8z1mq92g3483tjjigeufbzhx6v4n2mlnnhlgalnpklfi3d76byzv2omrk1xrlcngt2gxbhe1ov53jlsomc1f2xwysslwunf23o564ehfys4m9ixms6rxch1au9dvqe2un68xxwtujm3y7g9fz42euxtwy9ld54i7f5tndipdc4d82bxwhf666kvlyfjvj9tvlac0c96mb3jiqnrl0dpt8zcxp2msegp4un8dmn4ulhzexbqkx9jsoy6ek37p8qituuaqtgxh53imk63bex2cf2fqu8nchnmd2jorv7x72680u0g6wuafrdi8uvqtwudnrwk3gaalg8fgjvefyyl0sdguxavkev5x0z8b0u5ebaw6x769jt45y8xn8ntgj3r32bmmld4r53lxwjzzwjliga1r7vg61053n1ylmwb8xkiol9n392999hw9ixazn91cuexyt68mjn7lqoy1yvzh735koscuwxibd5zy7xyf68ymt865n289cdunslemhu3ft3o3pl99a6gyum5burudavn4qh6mgc0grff3nhsnicd9d8t3l93n8icas61hry723fpsj381gclsb9bv247owgo4s3gqfqj6hcqzasa76ifd6u8uexg0utj5w47abygov0bowajj8ufx1cu2wi0sxzq9qvubdwfbv3go5lzoz7bnt9jawrn51l4zxt2qn36cisbsenzdjb6s6ig5fxou3r9ishzul9tiauqtzygftzpdfmqrct1xxplmxpwkuy 00:09:04.485 05:07:54 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:09:04.485 [2024-12-08 05:07:54.223061] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:04.485 [2024-12-08 05:07:54.223756] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71098 ] 00:09:04.743 [2024-12-08 05:07:54.364898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.743 [2024-12-08 05:07:54.404896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.309  [2024-12-08T05:07:55.353Z] Copying: 511/511 [MB] (average 1458 MBps) 00:09:05.567 00:09:05.567 05:07:55 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:09:05.567 05:07:55 -- dd/uring.sh@54 -- # gen_conf 00:09:05.567 05:07:55 -- dd/common.sh@31 -- # xtrace_disable 00:09:05.567 05:07:55 -- common/autotest_common.sh@10 -- # set +x 00:09:05.567 [2024-12-08 05:07:55.201116] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
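The dd_uring_copy setup traced just above allocates a zram device through the zram-control sysfs interface, sizes it to 512M, writes a 1024-byte random magic plus zero padding into magic.dump0, and then pushes that file into a uring bdev backed by the zram device. A condensed sketch of those steps, assuming the standard zram disksize attribute and an illustrative uring.json carrying the malloc0/uring0 config that appears in the next run:

    id=$(cat /sys/class/zram-control/hot_add)          # the test got id 1
    echo 512M > "/sys/block/zram${id}/disksize"
    cat > uring.json <<EOF
    { "subsystems": [ { "subsystem": "bdev", "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "filename": "/dev/zram${id}", "name": "uring0" },
          "method": "bdev_uring_create" },
        { "method": "bdev_wait_for_examine" } ] } ] }
    EOF
    # magic.dump0 already starts with the 1024-byte magic; pad the file out toward the device size
    spdk_dd --if=/dev/zero --of=magic.dump0 --oflag=append --bs=536869887 --count=1
    # stage the whole file into the uring bdev
    spdk_dd --if=magic.dump0 --ob=uring0 --json uring.json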
00:09:05.567 [2024-12-08 05:07:55.201366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71107 ] 00:09:05.567 { 00:09:05.567 "subsystems": [ 00:09:05.567 { 00:09:05.567 "subsystem": "bdev", 00:09:05.567 "config": [ 00:09:05.567 { 00:09:05.567 "params": { 00:09:05.567 "block_size": 512, 00:09:05.567 "num_blocks": 1048576, 00:09:05.567 "name": "malloc0" 00:09:05.567 }, 00:09:05.567 "method": "bdev_malloc_create" 00:09:05.567 }, 00:09:05.567 { 00:09:05.567 "params": { 00:09:05.567 "filename": "/dev/zram1", 00:09:05.567 "name": "uring0" 00:09:05.567 }, 00:09:05.567 "method": "bdev_uring_create" 00:09:05.567 }, 00:09:05.567 { 00:09:05.567 "method": "bdev_wait_for_examine" 00:09:05.567 } 00:09:05.567 ] 00:09:05.567 } 00:09:05.567 ] 00:09:05.567 } 00:09:05.567 [2024-12-08 05:07:55.342082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.825 [2024-12-08 05:07:55.384072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.198  [2024-12-08T05:07:57.553Z] Copying: 178/512 [MB] (178 MBps) [2024-12-08T05:07:58.488Z] Copying: 372/512 [MB] (194 MBps) [2024-12-08T05:07:58.747Z] Copying: 512/512 [MB] (average 188 MBps) 00:09:08.961 00:09:08.961 05:07:58 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:09:08.961 05:07:58 -- dd/uring.sh@60 -- # gen_conf 00:09:08.961 05:07:58 -- dd/common.sh@31 -- # xtrace_disable 00:09:08.961 05:07:58 -- common/autotest_common.sh@10 -- # set +x 00:09:08.961 [2024-12-08 05:07:58.569938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:08.961 [2024-12-08 05:07:58.570043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71155 ] 00:09:08.961 { 00:09:08.961 "subsystems": [ 00:09:08.961 { 00:09:08.961 "subsystem": "bdev", 00:09:08.961 "config": [ 00:09:08.961 { 00:09:08.961 "params": { 00:09:08.961 "block_size": 512, 00:09:08.961 "num_blocks": 1048576, 00:09:08.961 "name": "malloc0" 00:09:08.961 }, 00:09:08.961 "method": "bdev_malloc_create" 00:09:08.961 }, 00:09:08.961 { 00:09:08.961 "params": { 00:09:08.961 "filename": "/dev/zram1", 00:09:08.961 "name": "uring0" 00:09:08.961 }, 00:09:08.961 "method": "bdev_uring_create" 00:09:08.961 }, 00:09:08.961 { 00:09:08.961 "method": "bdev_wait_for_examine" 00:09:08.961 } 00:09:08.961 ] 00:09:08.961 } 00:09:08.961 ] 00:09:08.961 } 00:09:08.961 [2024-12-08 05:07:58.705086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.961 [2024-12-08 05:07:58.744488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.336  [2024-12-08T05:08:01.069Z] Copying: 138/512 [MB] (138 MBps) [2024-12-08T05:08:02.012Z] Copying: 269/512 [MB] (131 MBps) [2024-12-08T05:08:02.952Z] Copying: 388/512 [MB] (118 MBps) [2024-12-08T05:08:03.210Z] Copying: 512/512 [MB] (average 131 MBps) 00:09:13.424 00:09:13.424 05:08:03 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:09:13.425 05:08:03 -- dd/uring.sh@66 -- # [[ zkalwe2t3o08duwkemtyspyq3zu5627vxxka6lpk17zi0lhxmmyym05ae9sd6myvtlwbqokhf61fqgsvwcptdvr8l08d22og87u8jc4z4qr8is74jf8g5x1x5q3950u9f7yg4t5sbrchh2rgegbaabixtzfhbsjpdx0a1wnggek09rb42f5rvpr1urrhayeprikspvcb9lc83y0y02aroxo8715wlzzgdbsw6g1p83te8c3pjmnne0bnu91cn6h8in8z1mq92g3483tjjigeufbzhx6v4n2mlnnhlgalnpklfi3d76byzv2omrk1xrlcngt2gxbhe1ov53jlsomc1f2xwysslwunf23o564ehfys4m9ixms6rxch1au9dvqe2un68xxwtujm3y7g9fz42euxtwy9ld54i7f5tndipdc4d82bxwhf666kvlyfjvj9tvlac0c96mb3jiqnrl0dpt8zcxp2msegp4un8dmn4ulhzexbqkx9jsoy6ek37p8qituuaqtgxh53imk63bex2cf2fqu8nchnmd2jorv7x72680u0g6wuafrdi8uvqtwudnrwk3gaalg8fgjvefyyl0sdguxavkev5x0z8b0u5ebaw6x769jt45y8xn8ntgj3r32bmmld4r53lxwjzzwjliga1r7vg61053n1ylmwb8xkiol9n392999hw9ixazn91cuexyt68mjn7lqoy1yvzh735koscuwxibd5zy7xyf68ymt865n289cdunslemhu3ft3o3pl99a6gyum5burudavn4qh6mgc0grff3nhsnicd9d8t3l93n8icas61hry723fpsj381gclsb9bv247owgo4s3gqfqj6hcqzasa76ifd6u8uexg0utj5w47abygov0bowajj8ufx1cu2wi0sxzq9qvubdwfbv3go5lzoz7bnt9jawrn51l4zxt2qn36cisbsenzdjb6s6ig5fxou3r9ishzul9tiauqtzygftzpdfmqrct1xxplmxpwkuy == 
\z\k\a\l\w\e\2\t\3\o\0\8\d\u\w\k\e\m\t\y\s\p\y\q\3\z\u\5\6\2\7\v\x\x\k\a\6\l\p\k\1\7\z\i\0\l\h\x\m\m\y\y\m\0\5\a\e\9\s\d\6\m\y\v\t\l\w\b\q\o\k\h\f\6\1\f\q\g\s\v\w\c\p\t\d\v\r\8\l\0\8\d\2\2\o\g\8\7\u\8\j\c\4\z\4\q\r\8\i\s\7\4\j\f\8\g\5\x\1\x\5\q\3\9\5\0\u\9\f\7\y\g\4\t\5\s\b\r\c\h\h\2\r\g\e\g\b\a\a\b\i\x\t\z\f\h\b\s\j\p\d\x\0\a\1\w\n\g\g\e\k\0\9\r\b\4\2\f\5\r\v\p\r\1\u\r\r\h\a\y\e\p\r\i\k\s\p\v\c\b\9\l\c\8\3\y\0\y\0\2\a\r\o\x\o\8\7\1\5\w\l\z\z\g\d\b\s\w\6\g\1\p\8\3\t\e\8\c\3\p\j\m\n\n\e\0\b\n\u\9\1\c\n\6\h\8\i\n\8\z\1\m\q\9\2\g\3\4\8\3\t\j\j\i\g\e\u\f\b\z\h\x\6\v\4\n\2\m\l\n\n\h\l\g\a\l\n\p\k\l\f\i\3\d\7\6\b\y\z\v\2\o\m\r\k\1\x\r\l\c\n\g\t\2\g\x\b\h\e\1\o\v\5\3\j\l\s\o\m\c\1\f\2\x\w\y\s\s\l\w\u\n\f\2\3\o\5\6\4\e\h\f\y\s\4\m\9\i\x\m\s\6\r\x\c\h\1\a\u\9\d\v\q\e\2\u\n\6\8\x\x\w\t\u\j\m\3\y\7\g\9\f\z\4\2\e\u\x\t\w\y\9\l\d\5\4\i\7\f\5\t\n\d\i\p\d\c\4\d\8\2\b\x\w\h\f\6\6\6\k\v\l\y\f\j\v\j\9\t\v\l\a\c\0\c\9\6\m\b\3\j\i\q\n\r\l\0\d\p\t\8\z\c\x\p\2\m\s\e\g\p\4\u\n\8\d\m\n\4\u\l\h\z\e\x\b\q\k\x\9\j\s\o\y\6\e\k\3\7\p\8\q\i\t\u\u\a\q\t\g\x\h\5\3\i\m\k\6\3\b\e\x\2\c\f\2\f\q\u\8\n\c\h\n\m\d\2\j\o\r\v\7\x\7\2\6\8\0\u\0\g\6\w\u\a\f\r\d\i\8\u\v\q\t\w\u\d\n\r\w\k\3\g\a\a\l\g\8\f\g\j\v\e\f\y\y\l\0\s\d\g\u\x\a\v\k\e\v\5\x\0\z\8\b\0\u\5\e\b\a\w\6\x\7\6\9\j\t\4\5\y\8\x\n\8\n\t\g\j\3\r\3\2\b\m\m\l\d\4\r\5\3\l\x\w\j\z\z\w\j\l\i\g\a\1\r\7\v\g\6\1\0\5\3\n\1\y\l\m\w\b\8\x\k\i\o\l\9\n\3\9\2\9\9\9\h\w\9\i\x\a\z\n\9\1\c\u\e\x\y\t\6\8\m\j\n\7\l\q\o\y\1\y\v\z\h\7\3\5\k\o\s\c\u\w\x\i\b\d\5\z\y\7\x\y\f\6\8\y\m\t\8\6\5\n\2\8\9\c\d\u\n\s\l\e\m\h\u\3\f\t\3\o\3\p\l\9\9\a\6\g\y\u\m\5\b\u\r\u\d\a\v\n\4\q\h\6\m\g\c\0\g\r\f\f\3\n\h\s\n\i\c\d\9\d\8\t\3\l\9\3\n\8\i\c\a\s\6\1\h\r\y\7\2\3\f\p\s\j\3\8\1\g\c\l\s\b\9\b\v\2\4\7\o\w\g\o\4\s\3\g\q\f\q\j\6\h\c\q\z\a\s\a\7\6\i\f\d\6\u\8\u\e\x\g\0\u\t\j\5\w\4\7\a\b\y\g\o\v\0\b\o\w\a\j\j\8\u\f\x\1\c\u\2\w\i\0\s\x\z\q\9\q\v\u\b\d\w\f\b\v\3\g\o\5\l\z\o\z\7\b\n\t\9\j\a\w\r\n\5\1\l\4\z\x\t\2\q\n\3\6\c\i\s\b\s\e\n\z\d\j\b\6\s\6\i\g\5\f\x\o\u\3\r\9\i\s\h\z\u\l\9\t\i\a\u\q\t\z\y\g\f\t\z\p\d\f\m\q\r\c\t\1\x\x\p\l\m\x\p\w\k\u\y ]] 00:09:13.425 05:08:03 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:09:13.425 05:08:03 -- dd/uring.sh@69 -- # [[ zkalwe2t3o08duwkemtyspyq3zu5627vxxka6lpk17zi0lhxmmyym05ae9sd6myvtlwbqokhf61fqgsvwcptdvr8l08d22og87u8jc4z4qr8is74jf8g5x1x5q3950u9f7yg4t5sbrchh2rgegbaabixtzfhbsjpdx0a1wnggek09rb42f5rvpr1urrhayeprikspvcb9lc83y0y02aroxo8715wlzzgdbsw6g1p83te8c3pjmnne0bnu91cn6h8in8z1mq92g3483tjjigeufbzhx6v4n2mlnnhlgalnpklfi3d76byzv2omrk1xrlcngt2gxbhe1ov53jlsomc1f2xwysslwunf23o564ehfys4m9ixms6rxch1au9dvqe2un68xxwtujm3y7g9fz42euxtwy9ld54i7f5tndipdc4d82bxwhf666kvlyfjvj9tvlac0c96mb3jiqnrl0dpt8zcxp2msegp4un8dmn4ulhzexbqkx9jsoy6ek37p8qituuaqtgxh53imk63bex2cf2fqu8nchnmd2jorv7x72680u0g6wuafrdi8uvqtwudnrwk3gaalg8fgjvefyyl0sdguxavkev5x0z8b0u5ebaw6x769jt45y8xn8ntgj3r32bmmld4r53lxwjzzwjliga1r7vg61053n1ylmwb8xkiol9n392999hw9ixazn91cuexyt68mjn7lqoy1yvzh735koscuwxibd5zy7xyf68ymt865n289cdunslemhu3ft3o3pl99a6gyum5burudavn4qh6mgc0grff3nhsnicd9d8t3l93n8icas61hry723fpsj381gclsb9bv247owgo4s3gqfqj6hcqzasa76ifd6u8uexg0utj5w47abygov0bowajj8ufx1cu2wi0sxzq9qvubdwfbv3go5lzoz7bnt9jawrn51l4zxt2qn36cisbsenzdjb6s6ig5fxou3r9ishzul9tiauqtzygftzpdfmqrct1xxplmxpwkuy == 
\z\k\a\l\w\e\2\t\3\o\0\8\d\u\w\k\e\m\t\y\s\p\y\q\3\z\u\5\6\2\7\v\x\x\k\a\6\l\p\k\1\7\z\i\0\l\h\x\m\m\y\y\m\0\5\a\e\9\s\d\6\m\y\v\t\l\w\b\q\o\k\h\f\6\1\f\q\g\s\v\w\c\p\t\d\v\r\8\l\0\8\d\2\2\o\g\8\7\u\8\j\c\4\z\4\q\r\8\i\s\7\4\j\f\8\g\5\x\1\x\5\q\3\9\5\0\u\9\f\7\y\g\4\t\5\s\b\r\c\h\h\2\r\g\e\g\b\a\a\b\i\x\t\z\f\h\b\s\j\p\d\x\0\a\1\w\n\g\g\e\k\0\9\r\b\4\2\f\5\r\v\p\r\1\u\r\r\h\a\y\e\p\r\i\k\s\p\v\c\b\9\l\c\8\3\y\0\y\0\2\a\r\o\x\o\8\7\1\5\w\l\z\z\g\d\b\s\w\6\g\1\p\8\3\t\e\8\c\3\p\j\m\n\n\e\0\b\n\u\9\1\c\n\6\h\8\i\n\8\z\1\m\q\9\2\g\3\4\8\3\t\j\j\i\g\e\u\f\b\z\h\x\6\v\4\n\2\m\l\n\n\h\l\g\a\l\n\p\k\l\f\i\3\d\7\6\b\y\z\v\2\o\m\r\k\1\x\r\l\c\n\g\t\2\g\x\b\h\e\1\o\v\5\3\j\l\s\o\m\c\1\f\2\x\w\y\s\s\l\w\u\n\f\2\3\o\5\6\4\e\h\f\y\s\4\m\9\i\x\m\s\6\r\x\c\h\1\a\u\9\d\v\q\e\2\u\n\6\8\x\x\w\t\u\j\m\3\y\7\g\9\f\z\4\2\e\u\x\t\w\y\9\l\d\5\4\i\7\f\5\t\n\d\i\p\d\c\4\d\8\2\b\x\w\h\f\6\6\6\k\v\l\y\f\j\v\j\9\t\v\l\a\c\0\c\9\6\m\b\3\j\i\q\n\r\l\0\d\p\t\8\z\c\x\p\2\m\s\e\g\p\4\u\n\8\d\m\n\4\u\l\h\z\e\x\b\q\k\x\9\j\s\o\y\6\e\k\3\7\p\8\q\i\t\u\u\a\q\t\g\x\h\5\3\i\m\k\6\3\b\e\x\2\c\f\2\f\q\u\8\n\c\h\n\m\d\2\j\o\r\v\7\x\7\2\6\8\0\u\0\g\6\w\u\a\f\r\d\i\8\u\v\q\t\w\u\d\n\r\w\k\3\g\a\a\l\g\8\f\g\j\v\e\f\y\y\l\0\s\d\g\u\x\a\v\k\e\v\5\x\0\z\8\b\0\u\5\e\b\a\w\6\x\7\6\9\j\t\4\5\y\8\x\n\8\n\t\g\j\3\r\3\2\b\m\m\l\d\4\r\5\3\l\x\w\j\z\z\w\j\l\i\g\a\1\r\7\v\g\6\1\0\5\3\n\1\y\l\m\w\b\8\x\k\i\o\l\9\n\3\9\2\9\9\9\h\w\9\i\x\a\z\n\9\1\c\u\e\x\y\t\6\8\m\j\n\7\l\q\o\y\1\y\v\z\h\7\3\5\k\o\s\c\u\w\x\i\b\d\5\z\y\7\x\y\f\6\8\y\m\t\8\6\5\n\2\8\9\c\d\u\n\s\l\e\m\h\u\3\f\t\3\o\3\p\l\9\9\a\6\g\y\u\m\5\b\u\r\u\d\a\v\n\4\q\h\6\m\g\c\0\g\r\f\f\3\n\h\s\n\i\c\d\9\d\8\t\3\l\9\3\n\8\i\c\a\s\6\1\h\r\y\7\2\3\f\p\s\j\3\8\1\g\c\l\s\b\9\b\v\2\4\7\o\w\g\o\4\s\3\g\q\f\q\j\6\h\c\q\z\a\s\a\7\6\i\f\d\6\u\8\u\e\x\g\0\u\t\j\5\w\4\7\a\b\y\g\o\v\0\b\o\w\a\j\j\8\u\f\x\1\c\u\2\w\i\0\s\x\z\q\9\q\v\u\b\d\w\f\b\v\3\g\o\5\l\z\o\z\7\b\n\t\9\j\a\w\r\n\5\1\l\4\z\x\t\2\q\n\3\6\c\i\s\b\s\e\n\z\d\j\b\6\s\6\i\g\5\f\x\o\u\3\r\9\i\s\h\z\u\l\9\t\i\a\u\q\t\z\y\g\f\t\z\p\d\f\m\q\r\c\t\1\x\x\p\l\m\x\p\w\k\u\y ]] 00:09:13.425 05:08:03 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:13.683 05:08:03 -- dd/uring.sh@75 -- # gen_conf 00:09:13.683 05:08:03 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:09:13.683 05:08:03 -- dd/common.sh@31 -- # xtrace_disable 00:09:13.683 05:08:03 -- common/autotest_common.sh@10 -- # set +x 00:09:13.941 [2024-12-08 05:08:03.479506] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
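With magic.dump0 copied into uring0 (pid 71107) and read back out into magic.dump1 (pid 71155), the verification above is plain bash: the first 1024 bytes are re-read and compared against the original magic, and the two dump files are diffed; the run starting here then repeats the exercise bdev-to-bdev by copying uring0 into malloc0. A compact sketch of the verification step, assuming $magic still holds the generated string:

    # the round-tripped file must still begin with the original 1024-byte magic
    read -rn1024 verify_magic < magic.dump1
    [[ $verify_magic == "$magic" ]] || { echo "magic mismatch after uring round trip" >&2; exit 1; }
    # and the two dumps must match byte for byte
    diff -q magic.dump0 magic.dump1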
00:09:13.941 [2024-12-08 05:08:03.480206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71228 ] 00:09:13.941 { 00:09:13.941 "subsystems": [ 00:09:13.941 { 00:09:13.941 "subsystem": "bdev", 00:09:13.941 "config": [ 00:09:13.941 { 00:09:13.941 "params": { 00:09:13.941 "block_size": 512, 00:09:13.941 "num_blocks": 1048576, 00:09:13.941 "name": "malloc0" 00:09:13.941 }, 00:09:13.941 "method": "bdev_malloc_create" 00:09:13.941 }, 00:09:13.941 { 00:09:13.941 "params": { 00:09:13.941 "filename": "/dev/zram1", 00:09:13.941 "name": "uring0" 00:09:13.941 }, 00:09:13.941 "method": "bdev_uring_create" 00:09:13.941 }, 00:09:13.941 { 00:09:13.941 "method": "bdev_wait_for_examine" 00:09:13.941 } 00:09:13.941 ] 00:09:13.941 } 00:09:13.941 ] 00:09:13.941 } 00:09:13.941 [2024-12-08 05:08:03.617959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.941 [2024-12-08 05:08:03.659588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.315  [2024-12-08T05:08:06.032Z] Copying: 137/512 [MB] (137 MBps) [2024-12-08T05:08:06.965Z] Copying: 275/512 [MB] (137 MBps) [2024-12-08T05:08:07.899Z] Copying: 404/512 [MB] (129 MBps) [2024-12-08T05:08:07.899Z] Copying: 512/512 [MB] (average 136 MBps) 00:09:18.113 00:09:18.113 05:08:07 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:09:18.113 05:08:07 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:09:18.113 05:08:07 -- dd/uring.sh@87 -- # : 00:09:18.113 05:08:07 -- dd/uring.sh@87 -- # : 00:09:18.113 05:08:07 -- dd/uring.sh@87 -- # gen_conf 00:09:18.113 05:08:07 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:09:18.113 05:08:07 -- dd/common.sh@31 -- # xtrace_disable 00:09:18.114 05:08:07 -- common/autotest_common.sh@10 -- # set +x 00:09:18.114 [2024-12-08 05:08:07.854159] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:18.114 [2024-12-08 05:08:07.855216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71292 ] 00:09:18.114 { 00:09:18.114 "subsystems": [ 00:09:18.114 { 00:09:18.114 "subsystem": "bdev", 00:09:18.114 "config": [ 00:09:18.114 { 00:09:18.114 "params": { 00:09:18.114 "block_size": 512, 00:09:18.114 "num_blocks": 1048576, 00:09:18.114 "name": "malloc0" 00:09:18.114 }, 00:09:18.114 "method": "bdev_malloc_create" 00:09:18.114 }, 00:09:18.114 { 00:09:18.114 "params": { 00:09:18.114 "filename": "/dev/zram1", 00:09:18.114 "name": "uring0" 00:09:18.114 }, 00:09:18.114 "method": "bdev_uring_create" 00:09:18.114 }, 00:09:18.114 { 00:09:18.114 "params": { 00:09:18.114 "name": "uring0" 00:09:18.114 }, 00:09:18.114 "method": "bdev_uring_delete" 00:09:18.114 }, 00:09:18.114 { 00:09:18.114 "method": "bdev_wait_for_examine" 00:09:18.114 } 00:09:18.114 ] 00:09:18.114 } 00:09:18.114 ] 00:09:18.114 } 00:09:18.372 [2024-12-08 05:08:07.996156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.372 [2024-12-08 05:08:08.035990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.630  [2024-12-08T05:08:08.673Z] Copying: 0/0 [B] (average 0 Bps) 00:09:18.887 00:09:18.887 05:08:08 -- dd/uring.sh@94 -- # : 00:09:18.887 05:08:08 -- dd/uring.sh@94 -- # gen_conf 00:09:18.887 05:08:08 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:18.887 05:08:08 -- dd/common.sh@31 -- # xtrace_disable 00:09:18.887 05:08:08 -- common/autotest_common.sh@650 -- # local es=0 00:09:18.887 05:08:08 -- common/autotest_common.sh@10 -- # set +x 00:09:18.887 05:08:08 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:18.887 05:08:08 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:18.887 05:08:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.887 05:08:08 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:18.887 05:08:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.887 05:08:08 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:18.887 05:08:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.887 05:08:08 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:18.887 05:08:08 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:18.887 05:08:08 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:18.887 [2024-12-08 05:08:08.517638] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
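The run that just started (pid 71320) is the negative half of the delete test: its config creates uring0 and then removes it again via bdev_uring_delete, so the attempt to read from uring0 fails with "No such device" (the error lines appear in the next stretch of the trace), which is exactly what the NOT wrapper asserts; the es=237 -> es=109 -> es=1 bookkeeping further down turns that expected failure into a pass. A minimal version of the same check without the harness, assuming uring-del.json is the config shown below, including the bdev_uring_delete entry, and an illustrative output target:

    # uring0 is deleted by the config itself, so reading from it must fail
    if spdk_dd --ib=uring0 --of=/dev/null --json uring-del.json; then
        echo "expected spdk_dd to fail after bdev_uring_delete" >&2
        exit 1
    fi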
00:09:18.887 [2024-12-08 05:08:08.517760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71320 ] 00:09:18.887 { 00:09:18.887 "subsystems": [ 00:09:18.887 { 00:09:18.887 "subsystem": "bdev", 00:09:18.887 "config": [ 00:09:18.887 { 00:09:18.887 "params": { 00:09:18.887 "block_size": 512, 00:09:18.887 "num_blocks": 1048576, 00:09:18.887 "name": "malloc0" 00:09:18.887 }, 00:09:18.887 "method": "bdev_malloc_create" 00:09:18.887 }, 00:09:18.887 { 00:09:18.887 "params": { 00:09:18.887 "filename": "/dev/zram1", 00:09:18.887 "name": "uring0" 00:09:18.887 }, 00:09:18.887 "method": "bdev_uring_create" 00:09:18.887 }, 00:09:18.887 { 00:09:18.887 "params": { 00:09:18.887 "name": "uring0" 00:09:18.887 }, 00:09:18.887 "method": "bdev_uring_delete" 00:09:18.887 }, 00:09:18.887 { 00:09:18.887 "method": "bdev_wait_for_examine" 00:09:18.887 } 00:09:18.887 ] 00:09:18.887 } 00:09:18.887 ] 00:09:18.887 } 00:09:18.887 [2024-12-08 05:08:08.662540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.145 [2024-12-08 05:08:08.703838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.145 [2024-12-08 05:08:08.865147] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:09:19.145 [2024-12-08 05:08:08.865215] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:09:19.145 [2024-12-08 05:08:08.865230] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:09:19.145 [2024-12-08 05:08:08.865243] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:19.403 [2024-12-08 05:08:09.043396] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:19.403 05:08:09 -- common/autotest_common.sh@653 -- # es=237 00:09:19.403 05:08:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:19.403 05:08:09 -- common/autotest_common.sh@662 -- # es=109 00:09:19.403 05:08:09 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:19.403 05:08:09 -- common/autotest_common.sh@670 -- # es=1 00:09:19.403 05:08:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:19.403 05:08:09 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:09:19.403 05:08:09 -- dd/common.sh@172 -- # local id=1 00:09:19.403 05:08:09 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:09:19.403 05:08:09 -- dd/common.sh@176 -- # echo 1 00:09:19.403 05:08:09 -- dd/common.sh@177 -- # echo 1 00:09:19.403 05:08:09 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:19.659 00:09:19.659 ************************************ 00:09:19.659 END TEST dd_uring_copy 00:09:19.659 ************************************ 00:09:19.659 real 0m15.267s 00:09:19.659 user 0m8.728s 00:09:19.659 sys 0m5.902s 00:09:19.659 05:08:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:19.659 05:08:09 -- common/autotest_common.sh@10 -- # set +x 00:09:19.659 ************************************ 00:09:19.659 END TEST spdk_dd_uring 00:09:19.659 ************************************ 00:09:19.659 00:09:19.659 real 0m15.514s 00:09:19.659 user 0m8.869s 00:09:19.659 sys 0m6.008s 00:09:19.659 05:08:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:19.659 05:08:09 -- common/autotest_common.sh@10 -- # set +x 00:09:19.917 05:08:09 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:19.917 05:08:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:19.917 05:08:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:19.917 05:08:09 -- common/autotest_common.sh@10 -- # set +x 00:09:19.917 ************************************ 00:09:19.917 START TEST spdk_dd_sparse 00:09:19.917 ************************************ 00:09:19.917 05:08:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:19.917 * Looking for test storage... 00:09:19.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:19.917 05:08:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:19.917 05:08:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:19.917 05:08:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:19.917 05:08:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:19.917 05:08:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:19.917 05:08:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:19.917 05:08:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:19.917 05:08:09 -- scripts/common.sh@335 -- # IFS=.-: 00:09:19.917 05:08:09 -- scripts/common.sh@335 -- # read -ra ver1 00:09:19.917 05:08:09 -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.917 05:08:09 -- scripts/common.sh@336 -- # read -ra ver2 00:09:19.917 05:08:09 -- scripts/common.sh@337 -- # local 'op=<' 00:09:19.917 05:08:09 -- scripts/common.sh@339 -- # ver1_l=2 00:09:19.917 05:08:09 -- scripts/common.sh@340 -- # ver2_l=1 00:09:19.917 05:08:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:19.917 05:08:09 -- scripts/common.sh@343 -- # case "$op" in 00:09:19.917 05:08:09 -- scripts/common.sh@344 -- # : 1 00:09:19.917 05:08:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:19.918 05:08:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.918 05:08:09 -- scripts/common.sh@364 -- # decimal 1 00:09:19.918 05:08:09 -- scripts/common.sh@352 -- # local d=1 00:09:19.918 05:08:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.918 05:08:09 -- scripts/common.sh@354 -- # echo 1 00:09:19.918 05:08:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:19.918 05:08:09 -- scripts/common.sh@365 -- # decimal 2 00:09:19.918 05:08:09 -- scripts/common.sh@352 -- # local d=2 00:09:19.918 05:08:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.918 05:08:09 -- scripts/common.sh@354 -- # echo 2 00:09:19.918 05:08:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:19.918 05:08:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:19.918 05:08:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:19.918 05:08:09 -- scripts/common.sh@367 -- # return 0 00:09:19.918 05:08:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.918 05:08:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:19.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.918 --rc genhtml_branch_coverage=1 00:09:19.918 --rc genhtml_function_coverage=1 00:09:19.918 --rc genhtml_legend=1 00:09:19.918 --rc geninfo_all_blocks=1 00:09:19.918 --rc geninfo_unexecuted_blocks=1 00:09:19.918 00:09:19.918 ' 00:09:19.918 05:08:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:19.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.918 --rc genhtml_branch_coverage=1 00:09:19.918 --rc genhtml_function_coverage=1 00:09:19.918 --rc genhtml_legend=1 00:09:19.918 --rc geninfo_all_blocks=1 00:09:19.918 --rc geninfo_unexecuted_blocks=1 00:09:19.918 00:09:19.918 ' 00:09:19.918 05:08:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:19.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.918 --rc genhtml_branch_coverage=1 00:09:19.918 --rc genhtml_function_coverage=1 00:09:19.918 --rc genhtml_legend=1 00:09:19.918 --rc geninfo_all_blocks=1 00:09:19.918 --rc geninfo_unexecuted_blocks=1 00:09:19.918 00:09:19.918 ' 00:09:19.918 05:08:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:19.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.918 --rc genhtml_branch_coverage=1 00:09:19.918 --rc genhtml_function_coverage=1 00:09:19.918 --rc genhtml_legend=1 00:09:19.918 --rc geninfo_all_blocks=1 00:09:19.918 --rc geninfo_unexecuted_blocks=1 00:09:19.918 00:09:19.918 ' 00:09:19.918 05:08:09 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:19.918 05:08:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.918 05:08:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.918 05:08:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.918 05:08:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.918 05:08:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.918 05:08:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.918 05:08:09 -- paths/export.sh@5 -- # export PATH 00:09:19.918 05:08:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.918 05:08:09 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:09:19.918 05:08:09 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:09:19.918 05:08:09 -- dd/sparse.sh@110 -- # file1=file_zero1 00:09:19.918 05:08:09 -- dd/sparse.sh@111 -- # file2=file_zero2 00:09:19.918 05:08:09 -- dd/sparse.sh@112 -- # file3=file_zero3 00:09:19.918 05:08:09 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:09:19.918 05:08:09 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:09:19.918 05:08:09 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:09:19.918 05:08:09 -- dd/sparse.sh@118 -- # prepare 00:09:19.918 05:08:09 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:09:19.918 05:08:09 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:09:19.918 1+0 records in 00:09:19.918 1+0 records out 00:09:19.918 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00911663 s, 460 MB/s 00:09:19.918 05:08:09 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:09:19.918 1+0 records in 00:09:19.918 1+0 records out 00:09:19.918 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00599287 s, 700 MB/s 00:09:19.918 05:08:09 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:09:19.918 1+0 records in 00:09:19.918 1+0 records out 00:09:19.918 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00540846 s, 776 MB/s 00:09:20.177 05:08:09 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:09:20.177 05:08:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:20.177 05:08:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:20.177 05:08:09 -- common/autotest_common.sh@10 -- # set +x 00:09:20.177 ************************************ 00:09:20.177 START TEST dd_sparse_file_to_file 00:09:20.177 
************************************ 00:09:20.177 05:08:09 -- common/autotest_common.sh@1114 -- # file_to_file 00:09:20.177 05:08:09 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:09:20.177 05:08:09 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:09:20.177 05:08:09 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:20.177 05:08:09 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:09:20.177 05:08:09 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:09:20.177 05:08:09 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:09:20.177 05:08:09 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:09:20.177 05:08:09 -- dd/sparse.sh@41 -- # gen_conf 00:09:20.177 05:08:09 -- dd/common.sh@31 -- # xtrace_disable 00:09:20.177 05:08:09 -- common/autotest_common.sh@10 -- # set +x 00:09:20.177 [2024-12-08 05:08:09.759527] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:20.177 [2024-12-08 05:08:09.759665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71413 ] 00:09:20.177 { 00:09:20.177 "subsystems": [ 00:09:20.177 { 00:09:20.177 "subsystem": "bdev", 00:09:20.177 "config": [ 00:09:20.177 { 00:09:20.177 "params": { 00:09:20.177 "block_size": 4096, 00:09:20.177 "filename": "dd_sparse_aio_disk", 00:09:20.177 "name": "dd_aio" 00:09:20.177 }, 00:09:20.177 "method": "bdev_aio_create" 00:09:20.177 }, 00:09:20.177 { 00:09:20.177 "params": { 00:09:20.177 "lvs_name": "dd_lvstore", 00:09:20.177 "bdev_name": "dd_aio" 00:09:20.177 }, 00:09:20.177 "method": "bdev_lvol_create_lvstore" 00:09:20.177 }, 00:09:20.177 { 00:09:20.177 "method": "bdev_wait_for_examine" 00:09:20.177 } 00:09:20.177 ] 00:09:20.177 } 00:09:20.177 ] 00:09:20.177 } 00:09:20.177 [2024-12-08 05:08:09.900186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.177 [2024-12-08 05:08:09.939731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.436  [2024-12-08T05:08:10.480Z] Copying: 12/36 [MB] (average 1500 MBps) 00:09:20.694 00:09:20.695 05:08:10 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:09:20.695 05:08:10 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:09:20.695 05:08:10 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:09:20.695 05:08:10 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:09:20.695 05:08:10 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:20.695 05:08:10 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:09:20.695 05:08:10 -- dd/sparse.sh@52 -- # stat1_b=24576 00:09:20.695 05:08:10 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:09:20.695 ************************************ 00:09:20.695 END TEST dd_sparse_file_to_file 00:09:20.695 ************************************ 00:09:20.695 05:08:10 -- dd/sparse.sh@53 -- # stat2_b=24576 00:09:20.695 05:08:10 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:20.695 00:09:20.695 real 0m0.556s 00:09:20.695 user 0m0.295s 00:09:20.695 sys 0m0.141s 00:09:20.695 05:08:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:20.695 05:08:10 -- common/autotest_common.sh@10 -- # set +x 00:09:20.695 05:08:10 -- dd/sparse.sh@121 -- # 
run_test dd_sparse_file_to_bdev file_to_bdev 00:09:20.695 05:08:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:20.695 05:08:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:20.695 05:08:10 -- common/autotest_common.sh@10 -- # set +x 00:09:20.695 ************************************ 00:09:20.695 START TEST dd_sparse_file_to_bdev 00:09:20.695 ************************************ 00:09:20.695 05:08:10 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:09:20.695 05:08:10 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:20.695 05:08:10 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:09:20.695 05:08:10 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:09:20.695 05:08:10 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:09:20.695 05:08:10 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:09:20.695 05:08:10 -- dd/sparse.sh@73 -- # gen_conf 00:09:20.695 05:08:10 -- dd/common.sh@31 -- # xtrace_disable 00:09:20.695 05:08:10 -- common/autotest_common.sh@10 -- # set +x 00:09:20.695 [2024-12-08 05:08:10.358784] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:20.695 [2024-12-08 05:08:10.358878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71459 ] 00:09:20.695 { 00:09:20.695 "subsystems": [ 00:09:20.695 { 00:09:20.695 "subsystem": "bdev", 00:09:20.695 "config": [ 00:09:20.695 { 00:09:20.695 "params": { 00:09:20.695 "block_size": 4096, 00:09:20.695 "filename": "dd_sparse_aio_disk", 00:09:20.695 "name": "dd_aio" 00:09:20.695 }, 00:09:20.695 "method": "bdev_aio_create" 00:09:20.695 }, 00:09:20.695 { 00:09:20.695 "params": { 00:09:20.695 "lvs_name": "dd_lvstore", 00:09:20.695 "lvol_name": "dd_lvol", 00:09:20.695 "size": 37748736, 00:09:20.695 "thin_provision": true 00:09:20.695 }, 00:09:20.695 "method": "bdev_lvol_create" 00:09:20.695 }, 00:09:20.695 { 00:09:20.695 "method": "bdev_wait_for_examine" 00:09:20.695 } 00:09:20.695 ] 00:09:20.695 } 00:09:20.695 ] 00:09:20.695 } 00:09:20.963 [2024-12-08 05:08:10.496610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.963 [2024-12-08 05:08:10.538041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.963 [2024-12-08 05:08:10.598856] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:09:20.963  [2024-12-08T05:08:10.749Z] Copying: 12/36 [MB] (average 600 MBps)[2024-12-08 05:08:10.635936] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:09:21.220 00:09:21.220 00:09:21.220 00:09:21.220 real 0m0.508s 00:09:21.220 user 0m0.301s 00:09:21.220 sys 0m0.118s 00:09:21.220 ************************************ 00:09:21.220 END TEST dd_sparse_file_to_bdev 00:09:21.220 ************************************ 00:09:21.220 05:08:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:21.220 05:08:10 -- common/autotest_common.sh@10 -- # set +x 
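Both sparse tests bracketing this point (file_zero1 -> file_zero2 above, and dd_lvstore/dd_lvol -> file_zero3 next) close with the same assertion: after a --sparse copy, source and destination must agree on the apparent size reported by stat %s and on the allocated block count reported by stat %b, which is what proves the holes were carried over instead of being written out as zeros. A standalone sketch of that check for the file-to-file case, using the file names from the trace:

    src_size=$(stat --printf=%s file_zero1)     # 37748736 bytes, a 36 MiB apparent size
    dst_size=$(stat --printf=%s file_zero2)
    [[ $src_size == "$dst_size" ]] || { echo "apparent size changed" >&2; exit 1; }
    src_blocks=$(stat --printf=%b file_zero1)   # 24576 512-byte blocks, i.e. only 12 MiB allocated
    dst_blocks=$(stat --printf=%b file_zero2)
    [[ $src_blocks == "$dst_blocks" ]] || { echo "sparseness not preserved" >&2; exit 1; }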
00:09:21.220 05:08:10 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:09:21.220 05:08:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:21.220 05:08:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.220 05:08:10 -- common/autotest_common.sh@10 -- # set +x 00:09:21.220 ************************************ 00:09:21.220 START TEST dd_sparse_bdev_to_file 00:09:21.220 ************************************ 00:09:21.220 05:08:10 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:09:21.220 05:08:10 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:09:21.220 05:08:10 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:09:21.220 05:08:10 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:21.220 05:08:10 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:09:21.220 05:08:10 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:09:21.220 05:08:10 -- dd/sparse.sh@91 -- # gen_conf 00:09:21.220 05:08:10 -- dd/common.sh@31 -- # xtrace_disable 00:09:21.220 05:08:10 -- common/autotest_common.sh@10 -- # set +x 00:09:21.220 [2024-12-08 05:08:10.921164] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:21.220 [2024-12-08 05:08:10.921280] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71485 ] 00:09:21.220 { 00:09:21.220 "subsystems": [ 00:09:21.220 { 00:09:21.220 "subsystem": "bdev", 00:09:21.220 "config": [ 00:09:21.220 { 00:09:21.220 "params": { 00:09:21.220 "block_size": 4096, 00:09:21.220 "filename": "dd_sparse_aio_disk", 00:09:21.220 "name": "dd_aio" 00:09:21.220 }, 00:09:21.220 "method": "bdev_aio_create" 00:09:21.221 }, 00:09:21.221 { 00:09:21.221 "method": "bdev_wait_for_examine" 00:09:21.221 } 00:09:21.221 ] 00:09:21.221 } 00:09:21.221 ] 00:09:21.221 } 00:09:21.478 [2024-12-08 05:08:11.059201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.478 [2024-12-08 05:08:11.097853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.478  [2024-12-08T05:08:11.522Z] Copying: 12/36 [MB] (average 1090 MBps) 00:09:21.736 00:09:21.736 05:08:11 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:09:21.736 05:08:11 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:09:21.736 05:08:11 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:09:21.736 05:08:11 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:09:21.736 05:08:11 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:21.736 05:08:11 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:09:21.736 05:08:11 -- dd/sparse.sh@102 -- # stat2_b=24576 00:09:21.736 05:08:11 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:09:21.736 05:08:11 -- dd/sparse.sh@103 -- # stat3_b=24576 00:09:21.736 05:08:11 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:21.736 00:09:21.736 real 0m0.528s 00:09:21.736 user 0m0.307s 00:09:21.736 sys 0m0.143s 00:09:21.736 05:08:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:21.736 ************************************ 00:09:21.736 END TEST dd_sparse_bdev_to_file 00:09:21.736 ************************************ 00:09:21.736 05:08:11 -- common/autotest_common.sh@10 -- # set +x 00:09:21.737 05:08:11 -- 
dd/sparse.sh@1 -- # cleanup 00:09:21.737 05:08:11 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:09:21.737 05:08:11 -- dd/sparse.sh@12 -- # rm file_zero1 00:09:21.737 05:08:11 -- dd/sparse.sh@13 -- # rm file_zero2 00:09:21.737 05:08:11 -- dd/sparse.sh@14 -- # rm file_zero3 00:09:21.737 ************************************ 00:09:21.737 END TEST spdk_dd_sparse 00:09:21.737 ************************************ 00:09:21.737 00:09:21.737 real 0m1.973s 00:09:21.737 user 0m1.084s 00:09:21.737 sys 0m0.599s 00:09:21.737 05:08:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:21.737 05:08:11 -- common/autotest_common.sh@10 -- # set +x 00:09:21.737 05:08:11 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:21.737 05:08:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:21.737 05:08:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.737 05:08:11 -- common/autotest_common.sh@10 -- # set +x 00:09:21.737 ************************************ 00:09:21.737 START TEST spdk_dd_negative 00:09:21.737 ************************************ 00:09:21.737 05:08:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:22.012 * Looking for test storage... 00:09:22.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:22.012 05:08:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:22.012 05:08:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:22.012 05:08:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:22.012 05:08:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:22.012 05:08:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:22.012 05:08:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:22.012 05:08:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:22.012 05:08:11 -- scripts/common.sh@335 -- # IFS=.-: 00:09:22.012 05:08:11 -- scripts/common.sh@335 -- # read -ra ver1 00:09:22.012 05:08:11 -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.012 05:08:11 -- scripts/common.sh@336 -- # read -ra ver2 00:09:22.012 05:08:11 -- scripts/common.sh@337 -- # local 'op=<' 00:09:22.012 05:08:11 -- scripts/common.sh@339 -- # ver1_l=2 00:09:22.012 05:08:11 -- scripts/common.sh@340 -- # ver2_l=1 00:09:22.012 05:08:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:22.012 05:08:11 -- scripts/common.sh@343 -- # case "$op" in 00:09:22.012 05:08:11 -- scripts/common.sh@344 -- # : 1 00:09:22.012 05:08:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:22.012 05:08:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.012 05:08:11 -- scripts/common.sh@364 -- # decimal 1 00:09:22.012 05:08:11 -- scripts/common.sh@352 -- # local d=1 00:09:22.013 05:08:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.013 05:08:11 -- scripts/common.sh@354 -- # echo 1 00:09:22.013 05:08:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:22.013 05:08:11 -- scripts/common.sh@365 -- # decimal 2 00:09:22.013 05:08:11 -- scripts/common.sh@352 -- # local d=2 00:09:22.013 05:08:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.013 05:08:11 -- scripts/common.sh@354 -- # echo 2 00:09:22.013 05:08:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:22.013 05:08:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:22.013 05:08:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:22.013 05:08:11 -- scripts/common.sh@367 -- # return 0 00:09:22.013 05:08:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.013 05:08:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:22.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.013 --rc genhtml_branch_coverage=1 00:09:22.013 --rc genhtml_function_coverage=1 00:09:22.013 --rc genhtml_legend=1 00:09:22.013 --rc geninfo_all_blocks=1 00:09:22.013 --rc geninfo_unexecuted_blocks=1 00:09:22.013 00:09:22.013 ' 00:09:22.013 05:08:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:22.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.013 --rc genhtml_branch_coverage=1 00:09:22.013 --rc genhtml_function_coverage=1 00:09:22.013 --rc genhtml_legend=1 00:09:22.013 --rc geninfo_all_blocks=1 00:09:22.013 --rc geninfo_unexecuted_blocks=1 00:09:22.013 00:09:22.013 ' 00:09:22.013 05:08:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:22.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.013 --rc genhtml_branch_coverage=1 00:09:22.013 --rc genhtml_function_coverage=1 00:09:22.013 --rc genhtml_legend=1 00:09:22.013 --rc geninfo_all_blocks=1 00:09:22.013 --rc geninfo_unexecuted_blocks=1 00:09:22.013 00:09:22.013 ' 00:09:22.013 05:08:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:22.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.013 --rc genhtml_branch_coverage=1 00:09:22.013 --rc genhtml_function_coverage=1 00:09:22.013 --rc genhtml_legend=1 00:09:22.013 --rc geninfo_all_blocks=1 00:09:22.013 --rc geninfo_unexecuted_blocks=1 00:09:22.013 00:09:22.013 ' 00:09:22.013 05:08:11 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:22.013 05:08:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.013 05:08:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.013 05:08:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.013 05:08:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.013 05:08:11 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.013 05:08:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.013 05:08:11 -- paths/export.sh@5 -- # export PATH 00:09:22.013 05:08:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.013 05:08:11 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:22.013 05:08:11 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:22.013 05:08:11 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:22.013 05:08:11 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:22.013 05:08:11 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:09:22.013 05:08:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:22.013 05:08:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.013 05:08:11 -- common/autotest_common.sh@10 -- # set +x 00:09:22.013 ************************************ 00:09:22.013 START TEST dd_invalid_arguments 00:09:22.013 ************************************ 00:09:22.013 05:08:11 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:09:22.013 05:08:11 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:22.013 05:08:11 -- common/autotest_common.sh@650 -- # local es=0 00:09:22.013 05:08:11 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:22.013 05:08:11 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.013 05:08:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.013 05:08:11 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.013 05:08:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.013 05:08:11 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.013 05:08:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.013 05:08:11 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.013 05:08:11 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:22.013 05:08:11 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:22.013 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:09:22.013 options: 00:09:22.013 -c, --config JSON config file (default none) 00:09:22.013 --json JSON config file (default none) 00:09:22.013 --json-ignore-init-errors 00:09:22.013 don't exit on invalid config entry 00:09:22.013 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:22.013 -g, --single-file-segments 00:09:22.013 force creating just one hugetlbfs file 00:09:22.013 -h, --help show this usage 00:09:22.013 -i, --shm-id shared memory ID (optional) 00:09:22.013 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:09:22.013 --lcores lcore to CPU mapping list. The list is in the format: 00:09:22.013 [<,lcores[@CPUs]>...] 00:09:22.013 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:22.013 Within the group, '-' is used for range separator, 00:09:22.013 ',' is used for single number separator. 00:09:22.013 '( )' can be omitted for single element group, 00:09:22.013 '@' can be omitted if cpus and lcores have the same value 00:09:22.013 -n, --mem-channels channel number of memory channels used for DPDK 00:09:22.013 -p, --main-core main (primary) core for DPDK 00:09:22.013 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:22.013 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:22.013 --disable-cpumask-locks Disable CPU core lock files. 00:09:22.013 --silence-noticelog disable notice level logging to stderr 00:09:22.013 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:22.013 -u, --no-pci disable PCI access 00:09:22.013 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:22.013 --max-delay maximum reactor delay (in microseconds) 00:09:22.013 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:22.013 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:22.013 -R, --huge-unlink unlink huge files after initialization 00:09:22.013 -v, --version print SPDK version 00:09:22.013 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:22.013 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:22.013 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:22.013 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:09:22.013 Tracepoints vary in size and can use more than one trace entry. 
00:09:22.013 --rpcs-allowed comma-separated list of permitted RPCS 00:09:22.013 --env-context Opaque context for use of the env implementation 00:09:22.013 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:22.013 --no-huge run without using hugepages 00:09:22.013 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:09:22.013 -e, --tpoint-group [:] 00:09:22.013 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:09:22.013 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:09:22.013 Groups and masks /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:09:22.014 [2024-12-08 05:08:11.745162] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:09:22.014 can be combined (e.g. thread,bdev:0x1). 00:09:22.014 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:09:22.014 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:09:22.014 [--------- DD Options ---------] 00:09:22.014 --if Input file. Must specify either --if or --ib. 00:09:22.014 --ib Input bdev. Must specifier either --if or --ib 00:09:22.014 --of Output file. Must specify either --of or --ob. 00:09:22.014 --ob Output bdev. Must specify either --of or --ob. 00:09:22.014 --iflag Input file flags. 00:09:22.014 --oflag Output file flags. 00:09:22.014 --bs I/O unit size (default: 4096) 00:09:22.014 --qd Queue depth (default: 2) 00:09:22.014 --count I/O unit count. The number of I/O units to copy. (default: all) 00:09:22.014 --skip Skip this many I/O units at start of input. (default: 0) 00:09:22.014 --seek Skip this many I/O units at start of output. (default: 0) 00:09:22.014 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:09:22.014 --sparse Enable hole skipping in input target 00:09:22.014 Available iflag and oflag values: 00:09:22.014 append - append mode 00:09:22.014 direct - use direct I/O for data 00:09:22.014 directory - fail unless a directory 00:09:22.014 dsync - use synchronized I/O for data 00:09:22.014 noatime - do not update access time 00:09:22.014 noctty - do not assign controlling terminal from file 00:09:22.014 nofollow - do not follow symlinks 00:09:22.014 nonblock - use non-blocking I/O 00:09:22.014 sync - use synchronized I/O for data and metadata 00:09:22.014 05:08:11 -- common/autotest_common.sh@653 -- # es=2 00:09:22.014 05:08:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:22.014 05:08:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:22.014 05:08:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:22.014 00:09:22.014 real 0m0.069s 00:09:22.014 user 0m0.035s 00:09:22.014 sys 0m0.032s 00:09:22.014 05:08:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:22.014 05:08:11 -- common/autotest_common.sh@10 -- # set +x 00:09:22.014 ************************************ 00:09:22.014 END TEST dd_invalid_arguments 00:09:22.014 ************************************ 00:09:22.271 05:08:11 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:09:22.271 05:08:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:22.271 05:08:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.271 05:08:11 -- common/autotest_common.sh@10 -- # set +x 00:09:22.271 ************************************ 00:09:22.271 START TEST dd_double_input 00:09:22.271 ************************************ 00:09:22.271 05:08:11 -- common/autotest_common.sh@1114 -- # double_input 00:09:22.271 05:08:11 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:22.271 05:08:11 -- common/autotest_common.sh@650 -- # local es=0 00:09:22.271 05:08:11 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:22.271 05:08:11 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.271 05:08:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.271 05:08:11 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.271 05:08:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.271 05:08:11 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.271 05:08:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.271 05:08:11 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.271 05:08:11 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:22.271 05:08:11 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:22.271 [2024-12-08 05:08:11.862117] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:09:22.271 05:08:11 -- common/autotest_common.sh@653 -- # es=22 00:09:22.271 05:08:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:22.271 05:08:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:22.271 05:08:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:22.271 00:09:22.271 real 0m0.070s 00:09:22.271 user 0m0.043s 00:09:22.271 sys 0m0.026s 00:09:22.271 05:08:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:22.271 ************************************ 00:09:22.271 END TEST dd_double_input 00:09:22.271 ************************************ 00:09:22.271 05:08:11 -- common/autotest_common.sh@10 -- # set +x 00:09:22.271 05:08:11 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:09:22.271 05:08:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:22.271 05:08:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.271 05:08:11 -- common/autotest_common.sh@10 -- # set +x 00:09:22.271 ************************************ 00:09:22.271 START TEST dd_double_output 00:09:22.271 ************************************ 00:09:22.271 05:08:11 -- common/autotest_common.sh@1114 -- # double_output 00:09:22.271 05:08:11 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:22.271 05:08:11 -- common/autotest_common.sh@650 -- # local es=0 00:09:22.271 05:08:11 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:22.271 05:08:11 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.271 05:08:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.271 05:08:11 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.271 05:08:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.271 05:08:11 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.271 05:08:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.271 05:08:11 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.271 05:08:11 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:22.271 05:08:11 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:22.271 [2024-12-08 05:08:11.987471] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:09:22.271 05:08:12 -- common/autotest_common.sh@653 -- # es=22 00:09:22.271 05:08:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:22.271 05:08:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:22.271 05:08:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:22.271 00:09:22.271 real 0m0.074s 00:09:22.271 user 0m0.049s 00:09:22.271 sys 0m0.023s 00:09:22.271 ************************************ 00:09:22.271 END TEST dd_double_output 00:09:22.271 ************************************ 00:09:22.271 05:08:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:22.271 05:08:12 -- common/autotest_common.sh@10 -- # set +x 00:09:22.271 05:08:12 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:09:22.271 05:08:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:22.271 05:08:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.271 05:08:12 -- common/autotest_common.sh@10 -- # set +x 00:09:22.272 ************************************ 00:09:22.272 START TEST dd_no_input 00:09:22.272 ************************************ 00:09:22.272 05:08:12 -- common/autotest_common.sh@1114 -- # no_input 00:09:22.272 05:08:12 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:22.272 05:08:12 -- common/autotest_common.sh@650 -- # local es=0 00:09:22.272 05:08:12 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:22.272 05:08:12 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.272 05:08:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.272 05:08:12 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.530 05:08:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.530 05:08:12 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.530 05:08:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.530 05:08:12 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.530 05:08:12 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:22.530 05:08:12 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:22.530 [2024-12-08 05:08:12.104190] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:09:22.530 05:08:12 -- common/autotest_common.sh@653 -- # es=22 00:09:22.530 05:08:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:22.530 05:08:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:22.530 05:08:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:22.530 00:09:22.530 real 0m0.068s 00:09:22.530 user 0m0.044s 00:09:22.530 sys 0m0.023s 00:09:22.530 05:08:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:22.530 ************************************ 00:09:22.530 END TEST dd_no_input 00:09:22.530 ************************************ 00:09:22.530 05:08:12 -- common/autotest_common.sh@10 -- # set +x 00:09:22.530 05:08:12 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:09:22.530 05:08:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:22.530 05:08:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.530 05:08:12 -- common/autotest_common.sh@10 -- # set +x 00:09:22.530 ************************************ 
00:09:22.530 START TEST dd_no_output 00:09:22.530 ************************************ 00:09:22.530 05:08:12 -- common/autotest_common.sh@1114 -- # no_output 00:09:22.530 05:08:12 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:22.530 05:08:12 -- common/autotest_common.sh@650 -- # local es=0 00:09:22.530 05:08:12 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:22.530 05:08:12 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.530 05:08:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.530 05:08:12 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.531 05:08:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.531 05:08:12 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.531 05:08:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.531 05:08:12 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.531 05:08:12 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:22.531 05:08:12 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:22.531 [2024-12-08 05:08:12.222626] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:09:22.531 05:08:12 -- common/autotest_common.sh@653 -- # es=22 00:09:22.531 05:08:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:22.531 05:08:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:22.531 05:08:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:22.531 00:09:22.531 real 0m0.073s 00:09:22.531 user 0m0.045s 00:09:22.531 sys 0m0.027s 00:09:22.531 05:08:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:22.531 ************************************ 00:09:22.531 END TEST dd_no_output 00:09:22.531 ************************************ 00:09:22.531 05:08:12 -- common/autotest_common.sh@10 -- # set +x 00:09:22.531 05:08:12 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:09:22.531 05:08:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:22.531 05:08:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.531 05:08:12 -- common/autotest_common.sh@10 -- # set +x 00:09:22.531 ************************************ 00:09:22.531 START TEST dd_wrong_blocksize 00:09:22.531 ************************************ 00:09:22.531 05:08:12 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:09:22.531 05:08:12 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:22.531 05:08:12 -- common/autotest_common.sh@650 -- # local es=0 00:09:22.531 05:08:12 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:22.531 05:08:12 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.531 05:08:12 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:09:22.531 05:08:12 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.531 05:08:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.531 05:08:12 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.531 05:08:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.531 05:08:12 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.531 05:08:12 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:22.531 05:08:12 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:22.789 [2024-12-08 05:08:12.337316] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:09:22.789 05:08:12 -- common/autotest_common.sh@653 -- # es=22 00:09:22.789 05:08:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:22.789 05:08:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:22.789 05:08:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:22.789 00:09:22.789 real 0m0.068s 00:09:22.789 user 0m0.040s 00:09:22.789 sys 0m0.027s 00:09:22.789 05:08:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:22.789 05:08:12 -- common/autotest_common.sh@10 -- # set +x 00:09:22.789 ************************************ 00:09:22.789 END TEST dd_wrong_blocksize 00:09:22.789 ************************************ 00:09:22.789 05:08:12 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:09:22.789 05:08:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:22.789 05:08:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.789 05:08:12 -- common/autotest_common.sh@10 -- # set +x 00:09:22.789 ************************************ 00:09:22.789 START TEST dd_smaller_blocksize 00:09:22.789 ************************************ 00:09:22.789 05:08:12 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:09:22.789 05:08:12 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:22.789 05:08:12 -- common/autotest_common.sh@650 -- # local es=0 00:09:22.789 05:08:12 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:22.789 05:08:12 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.789 05:08:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.789 05:08:12 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.789 05:08:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.789 05:08:12 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.789 05:08:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.789 05:08:12 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.789 05:08:12 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:09:22.789 05:08:12 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:22.789 [2024-12-08 05:08:12.455092] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:22.789 [2024-12-08 05:08:12.455232] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71714 ] 00:09:23.048 [2024-12-08 05:08:12.594116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.048 [2024-12-08 05:08:12.635087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.048 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:23.048 [2024-12-08 05:08:12.684973] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:09:23.048 [2024-12-08 05:08:12.685013] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:23.048 [2024-12-08 05:08:12.750614] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:23.048 05:08:12 -- common/autotest_common.sh@653 -- # es=244 00:09:23.048 05:08:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:23.048 05:08:12 -- common/autotest_common.sh@662 -- # es=116 00:09:23.048 05:08:12 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:23.048 05:08:12 -- common/autotest_common.sh@670 -- # es=1 00:09:23.048 05:08:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:23.048 00:09:23.048 real 0m0.430s 00:09:23.048 user 0m0.218s 00:09:23.048 sys 0m0.106s 00:09:23.048 ************************************ 00:09:23.048 END TEST dd_smaller_blocksize 00:09:23.048 ************************************ 00:09:23.048 05:08:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:23.048 05:08:12 -- common/autotest_common.sh@10 -- # set +x 00:09:23.307 05:08:12 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:09:23.307 05:08:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:23.307 05:08:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.307 05:08:12 -- common/autotest_common.sh@10 -- # set +x 00:09:23.307 ************************************ 00:09:23.307 START TEST dd_invalid_count 00:09:23.307 ************************************ 00:09:23.307 05:08:12 -- common/autotest_common.sh@1114 -- # invalid_count 00:09:23.307 05:08:12 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:23.307 05:08:12 -- common/autotest_common.sh@650 -- # local es=0 00:09:23.307 05:08:12 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:23.307 05:08:12 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.307 05:08:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.307 05:08:12 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.307 05:08:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.307 05:08:12 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.307 05:08:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.307 05:08:12 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.307 05:08:12 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:23.307 05:08:12 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:23.307 [2024-12-08 05:08:12.932222] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:09:23.307 05:08:12 -- common/autotest_common.sh@653 -- # es=22 00:09:23.307 05:08:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:23.307 05:08:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:23.307 05:08:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:23.307 00:09:23.307 real 0m0.072s 00:09:23.307 user 0m0.042s 00:09:23.307 sys 0m0.027s 00:09:23.307 05:08:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:23.307 ************************************ 00:09:23.307 END TEST dd_invalid_count 00:09:23.307 ************************************ 00:09:23.307 05:08:12 -- common/autotest_common.sh@10 -- # set +x 00:09:23.307 05:08:12 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:09:23.307 05:08:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:23.307 05:08:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.307 05:08:12 -- common/autotest_common.sh@10 -- # set +x 00:09:23.307 ************************************ 00:09:23.307 START TEST dd_invalid_oflag 00:09:23.307 ************************************ 00:09:23.307 05:08:12 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:09:23.307 05:08:12 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:23.307 05:08:12 -- common/autotest_common.sh@650 -- # local es=0 00:09:23.307 05:08:12 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:23.307 05:08:12 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.307 05:08:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.307 05:08:12 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.307 05:08:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.307 05:08:13 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.307 05:08:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.307 05:08:13 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.307 05:08:13 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:23.307 05:08:13 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:23.307 [2024-12-08 05:08:13.049029] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:09:23.307 05:08:13 -- common/autotest_common.sh@653 -- # es=22 00:09:23.307 05:08:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:23.307 05:08:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:23.307 
05:08:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:23.307 00:09:23.307 real 0m0.067s 00:09:23.307 user 0m0.039s 00:09:23.307 sys 0m0.027s 00:09:23.307 ************************************ 00:09:23.307 END TEST dd_invalid_oflag 00:09:23.307 ************************************ 00:09:23.307 05:08:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:23.307 05:08:13 -- common/autotest_common.sh@10 -- # set +x 00:09:23.566 05:08:13 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:09:23.566 05:08:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:23.566 05:08:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.566 05:08:13 -- common/autotest_common.sh@10 -- # set +x 00:09:23.566 ************************************ 00:09:23.566 START TEST dd_invalid_iflag 00:09:23.566 ************************************ 00:09:23.566 05:08:13 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:09:23.566 05:08:13 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:23.566 05:08:13 -- common/autotest_common.sh@650 -- # local es=0 00:09:23.566 05:08:13 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:23.566 05:08:13 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.566 05:08:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.566 05:08:13 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.566 05:08:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.566 05:08:13 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.566 05:08:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.566 05:08:13 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.566 05:08:13 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:23.566 05:08:13 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:23.566 [2024-12-08 05:08:13.166549] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:09:23.566 05:08:13 -- common/autotest_common.sh@653 -- # es=22 00:09:23.566 05:08:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:23.566 ************************************ 00:09:23.566 END TEST dd_invalid_iflag 00:09:23.566 ************************************ 00:09:23.566 05:08:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:23.566 05:08:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:23.566 00:09:23.566 real 0m0.069s 00:09:23.566 user 0m0.045s 00:09:23.566 sys 0m0.023s 00:09:23.566 05:08:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:23.566 05:08:13 -- common/autotest_common.sh@10 -- # set +x 00:09:23.566 05:08:13 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:09:23.566 05:08:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:23.566 05:08:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.566 05:08:13 -- common/autotest_common.sh@10 -- # set +x 00:09:23.566 ************************************ 00:09:23.566 START TEST dd_unknown_flag 00:09:23.566 ************************************ 00:09:23.566 05:08:13 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:09:23.566 05:08:13 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:23.566 05:08:13 -- common/autotest_common.sh@650 -- # local es=0 00:09:23.566 05:08:13 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:23.566 05:08:13 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.566 05:08:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.566 05:08:13 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.566 05:08:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.566 05:08:13 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.567 05:08:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.567 05:08:13 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.567 05:08:13 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:23.567 05:08:13 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:23.567 [2024-12-08 05:08:13.280909] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:23.567 [2024-12-08 05:08:13.281029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71801 ] 00:09:23.825 [2024-12-08 05:08:13.420001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.825 [2024-12-08 05:08:13.459846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.825 [2024-12-08 05:08:13.510185] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:09:23.825 [2024-12-08 05:08:13.510264] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:09:23.825 [2024-12-08 05:08:13.510280] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:09:23.825 [2024-12-08 05:08:13.510293] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:23.825 [2024-12-08 05:08:13.575179] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:24.137 05:08:13 -- common/autotest_common.sh@653 -- # es=236 00:09:24.137 05:08:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:24.137 05:08:13 -- common/autotest_common.sh@662 -- # es=108 00:09:24.137 ************************************ 00:09:24.137 END TEST dd_unknown_flag 00:09:24.137 ************************************ 00:09:24.137 05:08:13 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:24.137 05:08:13 -- common/autotest_common.sh@670 -- # es=1 00:09:24.137 05:08:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:24.137 00:09:24.137 real 0m0.422s 00:09:24.137 user 0m0.220s 00:09:24.137 sys 0m0.097s 00:09:24.137 05:08:13 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:09:24.137 05:08:13 -- common/autotest_common.sh@10 -- # set +x 00:09:24.137 05:08:13 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:09:24.137 05:08:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:24.137 05:08:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:24.137 05:08:13 -- common/autotest_common.sh@10 -- # set +x 00:09:24.137 ************************************ 00:09:24.137 START TEST dd_invalid_json 00:09:24.137 ************************************ 00:09:24.137 05:08:13 -- common/autotest_common.sh@1114 -- # invalid_json 00:09:24.137 05:08:13 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:24.137 05:08:13 -- common/autotest_common.sh@650 -- # local es=0 00:09:24.137 05:08:13 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:24.137 05:08:13 -- dd/negative_dd.sh@95 -- # : 00:09:24.137 05:08:13 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:24.137 05:08:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:24.137 05:08:13 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:24.137 05:08:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:24.137 05:08:13 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:24.137 05:08:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:24.137 05:08:13 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:24.137 05:08:13 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:24.138 05:08:13 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:24.138 [2024-12-08 05:08:13.758952] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:24.138 [2024-12-08 05:08:13.759063] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71828 ] 00:09:24.138 [2024-12-08 05:08:13.900586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.396 [2024-12-08 05:08:13.939978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.396 [2024-12-08 05:08:13.940131] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:09:24.396 [2024-12-08 05:08:13.940154] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:24.396 [2024-12-08 05:08:13.940201] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:24.396 05:08:14 -- common/autotest_common.sh@653 -- # es=234 00:09:24.396 05:08:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:24.396 05:08:14 -- common/autotest_common.sh@662 -- # es=106 00:09:24.396 05:08:14 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:24.396 05:08:14 -- common/autotest_common.sh@670 -- # es=1 00:09:24.396 05:08:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:24.396 00:09:24.396 real 0m0.315s 00:09:24.396 user 0m0.150s 00:09:24.396 sys 0m0.063s 00:09:24.396 05:08:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:24.396 05:08:14 -- common/autotest_common.sh@10 -- # set +x 00:09:24.396 ************************************ 00:09:24.396 END TEST dd_invalid_json 00:09:24.396 ************************************ 00:09:24.396 00:09:24.396 real 0m2.556s 00:09:24.396 user 0m1.264s 00:09:24.396 sys 0m0.934s 00:09:24.396 05:08:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:24.396 05:08:14 -- common/autotest_common.sh@10 -- # set +x 00:09:24.396 ************************************ 00:09:24.396 END TEST spdk_dd_negative 00:09:24.396 ************************************ 00:09:24.396 00:09:24.396 real 1m6.672s 00:09:24.396 user 0m40.654s 00:09:24.396 sys 0m16.757s 00:09:24.396 ************************************ 00:09:24.396 END TEST spdk_dd 00:09:24.396 ************************************ 00:09:24.396 05:08:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:24.396 05:08:14 -- common/autotest_common.sh@10 -- # set +x 00:09:24.396 05:08:14 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:09:24.396 05:08:14 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:09:24.396 05:08:14 -- spdk/autotest.sh@255 -- # timing_exit lib 00:09:24.396 05:08:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:24.396 05:08:14 -- common/autotest_common.sh@10 -- # set +x 00:09:24.396 05:08:14 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:09:24.396 05:08:14 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:09:24.396 05:08:14 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:09:24.396 05:08:14 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:09:24.396 05:08:14 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:09:24.396 05:08:14 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:09:24.397 05:08:14 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:24.397 05:08:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:24.397 05:08:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:24.397 05:08:14 -- common/autotest_common.sh@10 -- # set +x 00:09:24.656 ************************************ 00:09:24.656 START TEST 
nvmf_tcp 00:09:24.656 ************************************ 00:09:24.656 05:08:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:24.656 * Looking for test storage... 00:09:24.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:24.656 05:08:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:24.656 05:08:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:24.656 05:08:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:24.656 05:08:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:24.656 05:08:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:24.656 05:08:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:24.656 05:08:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:24.656 05:08:14 -- scripts/common.sh@335 -- # IFS=.-: 00:09:24.656 05:08:14 -- scripts/common.sh@335 -- # read -ra ver1 00:09:24.656 05:08:14 -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.656 05:08:14 -- scripts/common.sh@336 -- # read -ra ver2 00:09:24.656 05:08:14 -- scripts/common.sh@337 -- # local 'op=<' 00:09:24.656 05:08:14 -- scripts/common.sh@339 -- # ver1_l=2 00:09:24.656 05:08:14 -- scripts/common.sh@340 -- # ver2_l=1 00:09:24.656 05:08:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:24.656 05:08:14 -- scripts/common.sh@343 -- # case "$op" in 00:09:24.656 05:08:14 -- scripts/common.sh@344 -- # : 1 00:09:24.656 05:08:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:24.656 05:08:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.656 05:08:14 -- scripts/common.sh@364 -- # decimal 1 00:09:24.656 05:08:14 -- scripts/common.sh@352 -- # local d=1 00:09:24.656 05:08:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.656 05:08:14 -- scripts/common.sh@354 -- # echo 1 00:09:24.656 05:08:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:24.656 05:08:14 -- scripts/common.sh@365 -- # decimal 2 00:09:24.656 05:08:14 -- scripts/common.sh@352 -- # local d=2 00:09:24.656 05:08:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.656 05:08:14 -- scripts/common.sh@354 -- # echo 2 00:09:24.656 05:08:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:24.656 05:08:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:24.656 05:08:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:24.656 05:08:14 -- scripts/common.sh@367 -- # return 0 00:09:24.656 05:08:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.656 05:08:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:24.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.656 --rc genhtml_branch_coverage=1 00:09:24.656 --rc genhtml_function_coverage=1 00:09:24.656 --rc genhtml_legend=1 00:09:24.656 --rc geninfo_all_blocks=1 00:09:24.656 --rc geninfo_unexecuted_blocks=1 00:09:24.656 00:09:24.656 ' 00:09:24.656 05:08:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:24.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.656 --rc genhtml_branch_coverage=1 00:09:24.656 --rc genhtml_function_coverage=1 00:09:24.656 --rc genhtml_legend=1 00:09:24.656 --rc geninfo_all_blocks=1 00:09:24.656 --rc geninfo_unexecuted_blocks=1 00:09:24.656 00:09:24.656 ' 00:09:24.656 05:08:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:24.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.656 --rc 
genhtml_branch_coverage=1 00:09:24.656 --rc genhtml_function_coverage=1 00:09:24.656 --rc genhtml_legend=1 00:09:24.656 --rc geninfo_all_blocks=1 00:09:24.656 --rc geninfo_unexecuted_blocks=1 00:09:24.656 00:09:24.656 ' 00:09:24.656 05:08:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:24.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.657 --rc genhtml_branch_coverage=1 00:09:24.657 --rc genhtml_function_coverage=1 00:09:24.657 --rc genhtml_legend=1 00:09:24.657 --rc geninfo_all_blocks=1 00:09:24.657 --rc geninfo_unexecuted_blocks=1 00:09:24.657 00:09:24.657 ' 00:09:24.657 05:08:14 -- nvmf/nvmf.sh@10 -- # uname -s 00:09:24.657 05:08:14 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:24.657 05:08:14 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:24.657 05:08:14 -- nvmf/common.sh@7 -- # uname -s 00:09:24.657 05:08:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.657 05:08:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.657 05:08:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.657 05:08:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.657 05:08:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.657 05:08:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.657 05:08:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.657 05:08:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.657 05:08:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.657 05:08:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.657 05:08:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:09:24.657 05:08:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:09:24.657 05:08:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.657 05:08:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.657 05:08:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:24.657 05:08:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:24.657 05:08:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.657 05:08:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.657 05:08:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.657 05:08:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.657 05:08:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.657 05:08:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.657 05:08:14 -- paths/export.sh@5 -- # export PATH 00:09:24.657 05:08:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.657 05:08:14 -- nvmf/common.sh@46 -- # : 0 00:09:24.657 05:08:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:24.657 05:08:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:24.657 05:08:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:24.657 05:08:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.657 05:08:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.657 05:08:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:24.657 05:08:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:24.657 05:08:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:24.657 05:08:14 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:24.657 05:08:14 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:09:24.657 05:08:14 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:09:24.657 05:08:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:24.657 05:08:14 -- common/autotest_common.sh@10 -- # set +x 00:09:24.657 05:08:14 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:09:24.657 05:08:14 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:24.657 05:08:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:24.657 05:08:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:24.657 05:08:14 -- common/autotest_common.sh@10 -- # set +x 00:09:24.657 ************************************ 00:09:24.657 START TEST nvmf_host_management 00:09:24.657 ************************************ 00:09:24.657 05:08:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:24.916 * Looking for test storage... 
00:09:24.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:24.916 05:08:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:24.916 05:08:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:24.916 05:08:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:24.916 05:08:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:24.916 05:08:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:24.916 05:08:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:24.916 05:08:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:24.916 05:08:14 -- scripts/common.sh@335 -- # IFS=.-: 00:09:24.916 05:08:14 -- scripts/common.sh@335 -- # read -ra ver1 00:09:24.916 05:08:14 -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.916 05:08:14 -- scripts/common.sh@336 -- # read -ra ver2 00:09:24.916 05:08:14 -- scripts/common.sh@337 -- # local 'op=<' 00:09:24.916 05:08:14 -- scripts/common.sh@339 -- # ver1_l=2 00:09:24.916 05:08:14 -- scripts/common.sh@340 -- # ver2_l=1 00:09:24.916 05:08:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:24.916 05:08:14 -- scripts/common.sh@343 -- # case "$op" in 00:09:24.916 05:08:14 -- scripts/common.sh@344 -- # : 1 00:09:24.916 05:08:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:24.916 05:08:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.916 05:08:14 -- scripts/common.sh@364 -- # decimal 1 00:09:24.916 05:08:14 -- scripts/common.sh@352 -- # local d=1 00:09:24.916 05:08:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.916 05:08:14 -- scripts/common.sh@354 -- # echo 1 00:09:24.916 05:08:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:24.916 05:08:14 -- scripts/common.sh@365 -- # decimal 2 00:09:24.916 05:08:14 -- scripts/common.sh@352 -- # local d=2 00:09:24.916 05:08:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.916 05:08:14 -- scripts/common.sh@354 -- # echo 2 00:09:24.916 05:08:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:24.916 05:08:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:24.916 05:08:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:24.916 05:08:14 -- scripts/common.sh@367 -- # return 0 00:09:24.916 05:08:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.916 05:08:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:24.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.916 --rc genhtml_branch_coverage=1 00:09:24.916 --rc genhtml_function_coverage=1 00:09:24.916 --rc genhtml_legend=1 00:09:24.916 --rc geninfo_all_blocks=1 00:09:24.916 --rc geninfo_unexecuted_blocks=1 00:09:24.916 00:09:24.916 ' 00:09:24.916 05:08:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:24.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.916 --rc genhtml_branch_coverage=1 00:09:24.916 --rc genhtml_function_coverage=1 00:09:24.916 --rc genhtml_legend=1 00:09:24.916 --rc geninfo_all_blocks=1 00:09:24.916 --rc geninfo_unexecuted_blocks=1 00:09:24.916 00:09:24.916 ' 00:09:24.916 05:08:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:24.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.916 --rc genhtml_branch_coverage=1 00:09:24.916 --rc genhtml_function_coverage=1 00:09:24.916 --rc genhtml_legend=1 00:09:24.916 --rc geninfo_all_blocks=1 00:09:24.916 --rc geninfo_unexecuted_blocks=1 00:09:24.916 00:09:24.916 ' 00:09:24.916 
05:08:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:24.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.916 --rc genhtml_branch_coverage=1 00:09:24.916 --rc genhtml_function_coverage=1 00:09:24.916 --rc genhtml_legend=1 00:09:24.916 --rc geninfo_all_blocks=1 00:09:24.916 --rc geninfo_unexecuted_blocks=1 00:09:24.916 00:09:24.916 ' 00:09:24.916 05:08:14 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:24.916 05:08:14 -- nvmf/common.sh@7 -- # uname -s 00:09:24.916 05:08:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.916 05:08:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.916 05:08:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.916 05:08:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.916 05:08:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.916 05:08:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.916 05:08:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.916 05:08:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.916 05:08:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.916 05:08:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.916 05:08:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:09:24.917 05:08:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:09:24.917 05:08:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.917 05:08:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.917 05:08:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:24.917 05:08:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:24.917 05:08:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.917 05:08:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.917 05:08:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.917 05:08:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.917 05:08:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.917 05:08:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.917 05:08:14 -- paths/export.sh@5 -- # export PATH 00:09:24.917 05:08:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.917 05:08:14 -- nvmf/common.sh@46 -- # : 0 00:09:24.917 05:08:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:24.917 05:08:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:24.917 05:08:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:24.917 05:08:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.917 05:08:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.917 05:08:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:24.917 05:08:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:24.917 05:08:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:24.917 05:08:14 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:24.917 05:08:14 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:24.917 05:08:14 -- target/host_management.sh@104 -- # nvmftestinit 00:09:24.917 05:08:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:24.917 05:08:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.917 05:08:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:24.917 05:08:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:24.917 05:08:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:24.917 05:08:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.917 05:08:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.917 05:08:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.917 05:08:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:24.917 05:08:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:24.917 05:08:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:24.917 05:08:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:24.917 05:08:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:24.917 05:08:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:24.917 05:08:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.917 05:08:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.917 05:08:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:24.917 05:08:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:24.917 05:08:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:24.917 05:08:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:24.917 05:08:14 -- 
nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:24.917 05:08:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.917 05:08:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:24.917 05:08:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:24.917 05:08:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:24.917 05:08:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:24.917 05:08:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:24.917 Cannot find device "nvmf_init_br" 00:09:24.917 05:08:14 -- nvmf/common.sh@153 -- # true 00:09:24.917 05:08:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:24.917 Cannot find device "nvmf_tgt_br" 00:09:24.917 05:08:14 -- nvmf/common.sh@154 -- # true 00:09:24.917 05:08:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:24.917 Cannot find device "nvmf_tgt_br2" 00:09:24.917 05:08:14 -- nvmf/common.sh@155 -- # true 00:09:24.917 05:08:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:24.917 Cannot find device "nvmf_init_br" 00:09:24.917 05:08:14 -- nvmf/common.sh@156 -- # true 00:09:24.917 05:08:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:24.917 Cannot find device "nvmf_tgt_br" 00:09:24.917 05:08:14 -- nvmf/common.sh@157 -- # true 00:09:24.917 05:08:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:24.917 Cannot find device "nvmf_tgt_br2" 00:09:24.917 05:08:14 -- nvmf/common.sh@158 -- # true 00:09:24.917 05:08:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:24.917 Cannot find device "nvmf_br" 00:09:24.917 05:08:14 -- nvmf/common.sh@159 -- # true 00:09:24.917 05:08:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:24.917 Cannot find device "nvmf_init_if" 00:09:24.917 05:08:14 -- nvmf/common.sh@160 -- # true 00:09:24.917 05:08:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:24.917 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:24.917 05:08:14 -- nvmf/common.sh@161 -- # true 00:09:24.917 05:08:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:24.917 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:24.917 05:08:14 -- nvmf/common.sh@162 -- # true 00:09:24.917 05:08:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:24.917 05:08:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:24.917 05:08:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:25.175 05:08:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:25.176 05:08:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:25.176 05:08:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:25.176 05:08:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:25.176 05:08:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:25.176 05:08:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:25.176 05:08:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:25.176 05:08:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:25.176 05:08:14 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:25.176 05:08:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:25.176 05:08:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:25.176 05:08:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:25.176 05:08:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:25.176 05:08:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:25.176 05:08:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:25.176 05:08:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:25.176 05:08:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:25.176 05:08:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:25.176 05:08:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:25.433 05:08:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:25.433 05:08:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:25.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:25.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:09:25.433 00:09:25.433 --- 10.0.0.2 ping statistics --- 00:09:25.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.433 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:09:25.433 05:08:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:25.433 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:25.433 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:09:25.433 00:09:25.433 --- 10.0.0.3 ping statistics --- 00:09:25.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.433 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:09:25.433 05:08:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:25.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:25.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:09:25.433 00:09:25.433 --- 10.0.0.1 ping statistics --- 00:09:25.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.433 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:09:25.433 05:08:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.433 05:08:15 -- nvmf/common.sh@421 -- # return 0 00:09:25.433 05:08:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:25.433 05:08:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.433 05:08:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:25.433 05:08:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:25.433 05:08:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.433 05:08:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:25.433 05:08:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:25.433 05:08:15 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:09:25.433 05:08:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:25.433 05:08:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:25.433 05:08:15 -- common/autotest_common.sh@10 -- # set +x 00:09:25.433 ************************************ 00:09:25.433 START TEST nvmf_host_management 00:09:25.433 ************************************ 00:09:25.433 05:08:15 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:09:25.433 05:08:15 -- target/host_management.sh@69 -- # starttarget 00:09:25.433 05:08:15 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:25.433 05:08:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:25.433 05:08:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:25.433 05:08:15 -- common/autotest_common.sh@10 -- # set +x 00:09:25.433 05:08:15 -- nvmf/common.sh@469 -- # nvmfpid=72097 00:09:25.433 05:08:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:25.433 05:08:15 -- nvmf/common.sh@470 -- # waitforlisten 72097 00:09:25.433 05:08:15 -- common/autotest_common.sh@829 -- # '[' -z 72097 ']' 00:09:25.433 05:08:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.433 05:08:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.433 05:08:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.433 05:08:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.433 05:08:15 -- common/autotest_common.sh@10 -- # set +x 00:09:25.433 [2024-12-08 05:08:15.180902] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:25.433 [2024-12-08 05:08:15.181034] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.691 [2024-12-08 05:08:15.330140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:25.691 [2024-12-08 05:08:15.367317] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:25.691 [2024-12-08 05:08:15.367470] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
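The nvmf_veth_init sequence traced above reduces to a small veth/namespace topology: a target namespace (nvmf_tgt_ns_spdk) holding two endpoints at 10.0.0.2 and 10.0.0.3, an initiator-side interface at 10.0.0.1, the peer ends joined by a bridge (nvmf_br), an iptables rule opening TCP/4420, and the nvme-tcp module loaded at the end. A condensed sketch of that setup, reassembled from the commands visible in the trace (names and addresses as logged; the log's error handling and teardown are omitted):

    # create the target network namespace and three veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # move the target-side endpoints into the namespace and assign addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up and bridge the host-side peers together
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # allow NVMe/TCP traffic to the default port, verify reachability, load the driver
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp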
00:09:25.691 [2024-12-08 05:08:15.367484] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.691 [2024-12-08 05:08:15.367492] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.691 [2024-12-08 05:08:15.367693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.691 [2024-12-08 05:08:15.368220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:25.691 [2024-12-08 05:08:15.368343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:25.691 [2024-12-08 05:08:15.368348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.625 05:08:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.625 05:08:16 -- common/autotest_common.sh@862 -- # return 0 00:09:26.625 05:08:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:26.625 05:08:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:26.625 05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:09:26.625 05:08:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.625 05:08:16 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:26.625 05:08:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.625 05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:09:26.625 [2024-12-08 05:08:16.215506] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.625 05:08:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.625 05:08:16 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:26.625 05:08:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:26.625 05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:09:26.625 05:08:16 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:26.625 05:08:16 -- target/host_management.sh@23 -- # cat 00:09:26.625 05:08:16 -- target/host_management.sh@30 -- # rpc_cmd 00:09:26.625 05:08:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.625 05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:09:26.625 Malloc0 00:09:26.625 [2024-12-08 05:08:16.289519] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.625 05:08:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.625 05:08:16 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:26.625 05:08:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:26.625 05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:09:26.625 05:08:16 -- target/host_management.sh@73 -- # perfpid=72157 00:09:26.625 05:08:16 -- target/host_management.sh@74 -- # waitforlisten 72157 /var/tmp/bdevperf.sock 00:09:26.625 05:08:16 -- common/autotest_common.sh@829 -- # '[' -z 72157 ']' 00:09:26.625 05:08:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:26.625 05:08:16 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:26.625 05:08:16 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:26.625 05:08:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:26.625 05:08:16 -- nvmf/common.sh@520 -- # config=() 00:09:26.625 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock... 00:09:26.625 05:08:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:26.625 05:08:16 -- nvmf/common.sh@520 -- # local subsystem config 00:09:26.625 05:08:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:26.625 05:08:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:26.626 05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:09:26.626 05:08:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:26.626 { 00:09:26.626 "params": { 00:09:26.626 "name": "Nvme$subsystem", 00:09:26.626 "trtype": "$TEST_TRANSPORT", 00:09:26.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:26.626 "adrfam": "ipv4", 00:09:26.626 "trsvcid": "$NVMF_PORT", 00:09:26.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:26.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:26.626 "hdgst": ${hdgst:-false}, 00:09:26.626 "ddgst": ${ddgst:-false} 00:09:26.626 }, 00:09:26.626 "method": "bdev_nvme_attach_controller" 00:09:26.626 } 00:09:26.626 EOF 00:09:26.626 )") 00:09:26.626 05:08:16 -- nvmf/common.sh@542 -- # cat 00:09:26.626 05:08:16 -- nvmf/common.sh@544 -- # jq . 00:09:26.626 05:08:16 -- nvmf/common.sh@545 -- # IFS=, 00:09:26.626 05:08:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:26.626 "params": { 00:09:26.626 "name": "Nvme0", 00:09:26.626 "trtype": "tcp", 00:09:26.626 "traddr": "10.0.0.2", 00:09:26.626 "adrfam": "ipv4", 00:09:26.626 "trsvcid": "4420", 00:09:26.626 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:26.626 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:26.626 "hdgst": false, 00:09:26.626 "ddgst": false 00:09:26.626 }, 00:09:26.626 "method": "bdev_nvme_attach_controller" 00:09:26.626 }' 00:09:26.626 [2024-12-08 05:08:16.377597] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:26.626 [2024-12-08 05:08:16.377711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72157 ] 00:09:26.883 [2024-12-08 05:08:16.511736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.883 [2024-12-08 05:08:16.552462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.142 Running I/O for 10 seconds... 
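The gen_nvmf_target_json step above expands the templated params into the single-controller config that bdevperf reads over --json /dev/fd/63. Only the inner method/params object is printed in the trace; the outer "subsystems"/"bdev" wrapper in the sketch below is the usual SPDK JSON-config layout and is an assumption, as is the temporary file used here in place of the anonymous descriptor:

    # assumed full shape of the JSON handed to bdevperf; the inner
    # params/method block is taken from the printf output logged above
    cat > /tmp/bdevperf_nvme0.json <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }]
      }]
    }
    EOF

    # equivalent of the logged invocation, pointed at the file instead of /dev/fd/63
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10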
00:09:28.078 05:08:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.078 05:08:17 -- common/autotest_common.sh@862 -- # return 0 00:09:28.078 05:08:17 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:28.078 05:08:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.078 05:08:17 -- common/autotest_common.sh@10 -- # set +x 00:09:28.078 05:08:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.078 05:08:17 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:28.078 05:08:17 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:28.078 05:08:17 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:28.078 05:08:17 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:28.078 05:08:17 -- target/host_management.sh@52 -- # local ret=1 00:09:28.078 05:08:17 -- target/host_management.sh@53 -- # local i 00:09:28.078 05:08:17 -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:28.078 05:08:17 -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:28.078 05:08:17 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:28.078 05:08:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.078 05:08:17 -- common/autotest_common.sh@10 -- # set +x 00:09:28.078 05:08:17 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:28.078 05:08:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.078 05:08:17 -- target/host_management.sh@55 -- # read_io_count=2126 00:09:28.078 05:08:17 -- target/host_management.sh@58 -- # '[' 2126 -ge 100 ']' 00:09:28.078 05:08:17 -- target/host_management.sh@59 -- # ret=0 00:09:28.078 05:08:17 -- target/host_management.sh@60 -- # break 00:09:28.078 05:08:17 -- target/host_management.sh@64 -- # return 0 00:09:28.078 05:08:17 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:28.078 05:08:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.078 05:08:17 -- common/autotest_common.sh@10 -- # set +x 00:09:28.078 [2024-12-08 05:08:17.747085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.078 [2024-12-08 05:08:17.747144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.078 [2024-12-08 05:08:17.747173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.078 [2024-12-08 05:08:17.747185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.078 [2024-12-08 05:08:17.747198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.078 [2024-12-08 05:08:17.747212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.078 05:08:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.078 [2024-12-08 05:08:17.747231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:09:28.078 [2024-12-08 05:08:17.747248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.078 [2024-12-08 05:08:17.747270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.078 [2024-12-08 05:08:17.747287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.078 [2024-12-08 05:08:17.747306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.078 [2024-12-08 05:08:17.747321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.078 [2024-12-08 05:08:17.747341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.078 [2024-12-08 05:08:17.747356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.078 [2024-12-08 05:08:17.747374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.078 [2024-12-08 05:08:17.747390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.078 [2024-12-08 05:08:17.747408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.078 [2024-12-08 05:08:17.747424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.078 05:08:17 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:28.078 [2024-12-08 05:08:17.747445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.078 [2024-12-08 05:08:17.747461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.078 [2024-12-08 05:08:17.747480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.078 [2024-12-08 05:08:17.747497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.078 [2024-12-08 05:08:17.747516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.078 [2024-12-08 05:08:17.747532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.078 [2024-12-08 05:08:17.747550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.078 [2024-12-08 05:08:17.747565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.078 [2024-12-08 
05:08:17.747583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.078 [2024-12-08 05:08:17.747598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.078 [2024-12-08 05:08:17.747617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.078 05:08:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.078 [2024-12-08 05:08:17.747632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.078 [2024-12-08 05:08:17.747652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.078 [2024-12-08 05:08:17.747669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.078 [2024-12-08 05:08:17.747712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.078 [2024-12-08 05:08:17.747728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.078 [2024-12-08 05:08:17.747740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.078 [2024-12-08 05:08:17.747750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.747762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.747776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.747795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 05:08:17 -- common/autotest_common.sh@10 -- # set +x 00:09:28.079 [2024-12-08 05:08:17.747809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.747822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.747831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.747843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.747852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.747864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.747873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.747885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.747894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.747905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.747915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.747926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.747935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.747947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.747956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.747968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.747983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.747995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.079 [2024-12-08 05:08:17.748602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.079 [2024-12-08 05:08:17.748611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.080 [2024-12-08 05:08:17.748622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.080 [2024-12-08 05:08:17.748631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.080 [2024-12-08 05:08:17.748642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.080 [2024-12-08 05:08:17.748652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.080 [2024-12-08 05:08:17.748663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.080 [2024-12-08 05:08:17.748689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.080 [2024-12-08 05:08:17.748703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.080 [2024-12-08 05:08:17.748713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.080 [2024-12-08 05:08:17.748725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.080 [2024-12-08 05:08:17.748734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.080 [2024-12-08 05:08:17.748746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.080 [2024-12-08 05:08:17.748755] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.080 [2024-12-08 05:08:17.748769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:28.080 [2024-12-08 05:08:17.748785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.080 [2024-12-08 05:08:17.748801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a44460 is same with the state(5) to be set 00:09:28.080 [2024-12-08 05:08:17.748858] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a44460 was disconnected and freed. reset controller. 00:09:28.080 [2024-12-08 05:08:17.748989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:28.080 [2024-12-08 05:08:17.749007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.080 [2024-12-08 05:08:17.749018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:28.080 [2024-12-08 05:08:17.749028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.080 [2024-12-08 05:08:17.749037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:28.080 [2024-12-08 05:08:17.749047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.080 [2024-12-08 05:08:17.749057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:28.080 [2024-12-08 05:08:17.749066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.080 [2024-12-08 05:08:17.749075] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a45da0 is same with the state(5) to be set 00:09:28.080 [2024-12-08 05:08:17.750241] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:28.080 task offset: 27776 on job bdev=Nvme0n1 fails 00:09:28.080 00:09:28.080 Latency(us) 00:09:28.080 [2024-12-08T05:08:17.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.080 [2024-12-08T05:08:17.866Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:28.080 [2024-12-08T05:08:17.866Z] Job: Nvme0n1 ended in about 1.05 seconds with error 00:09:28.080 Verification LBA range: start 0x0 length 0x400 00:09:28.080 Nvme0n1 : 1.05 2134.66 133.42 60.69 0.00 28802.83 2412.92 39559.91 00:09:28.080 [2024-12-08T05:08:17.866Z] =================================================================================================================== 00:09:28.080 [2024-12-08T05:08:17.866Z] Total : 2134.66 133.42 60.69 0.00 28802.83 2412.92 39559.91 00:09:28.080 [2024-12-08 05:08:17.752801] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:28.080 [2024-12-08 05:08:17.752849] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a45da0 
(9): Bad file descriptor 00:09:28.080 05:08:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.080 05:08:17 -- target/host_management.sh@87 -- # sleep 1 00:09:28.080 [2024-12-08 05:08:17.762040] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:29.040 05:08:18 -- target/host_management.sh@91 -- # kill -9 72157 00:09:29.040 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72157) - No such process 00:09:29.040 05:08:18 -- target/host_management.sh@91 -- # true 00:09:29.040 05:08:18 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:29.040 05:08:18 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:29.040 05:08:18 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:29.040 05:08:18 -- nvmf/common.sh@520 -- # config=() 00:09:29.040 05:08:18 -- nvmf/common.sh@520 -- # local subsystem config 00:09:29.040 05:08:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:29.040 05:08:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:29.040 { 00:09:29.040 "params": { 00:09:29.040 "name": "Nvme$subsystem", 00:09:29.040 "trtype": "$TEST_TRANSPORT", 00:09:29.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:29.040 "adrfam": "ipv4", 00:09:29.040 "trsvcid": "$NVMF_PORT", 00:09:29.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:29.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:29.040 "hdgst": ${hdgst:-false}, 00:09:29.040 "ddgst": ${ddgst:-false} 00:09:29.040 }, 00:09:29.040 "method": "bdev_nvme_attach_controller" 00:09:29.040 } 00:09:29.040 EOF 00:09:29.040 )") 00:09:29.040 05:08:18 -- nvmf/common.sh@542 -- # cat 00:09:29.040 05:08:18 -- nvmf/common.sh@544 -- # jq . 00:09:29.040 05:08:18 -- nvmf/common.sh@545 -- # IFS=, 00:09:29.040 05:08:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:29.040 "params": { 00:09:29.040 "name": "Nvme0", 00:09:29.040 "trtype": "tcp", 00:09:29.040 "traddr": "10.0.0.2", 00:09:29.040 "adrfam": "ipv4", 00:09:29.040 "trsvcid": "4420", 00:09:29.040 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:29.040 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:29.040 "hdgst": false, 00:09:29.040 "ddgst": false 00:09:29.040 }, 00:09:29.040 "method": "bdev_nvme_attach_controller" 00:09:29.040 }' 00:09:29.298 [2024-12-08 05:08:18.820866] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:29.298 [2024-12-08 05:08:18.820977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72200 ] 00:09:29.298 [2024-12-08 05:08:18.962118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.298 [2024-12-08 05:08:19.004303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.556 Running I/O for 1 seconds... 
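The bdevperf invocation above reads its bdev configuration from an anonymous descriptor (--json /dev/fd/62); the gen_nvmf_target_json heredoc expands one bdev_nvme_attach_controller entry per subsystem, and the printf output shows the parameters it settled on for Nvme0. A minimal sketch of the config as bdevperf would consume it, assuming the usual SPDK "subsystems"/"bdev" wrapper, which the trace itself does not print:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }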
00:09:30.488 00:09:30.488 Latency(us) 00:09:30.488 [2024-12-08T05:08:20.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.488 [2024-12-08T05:08:20.274Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:30.488 Verification LBA range: start 0x0 length 0x400 00:09:30.488 Nvme0n1 : 1.02 2330.03 145.63 0.00 0.00 27001.25 2189.50 31933.91 00:09:30.489 [2024-12-08T05:08:20.275Z] =================================================================================================================== 00:09:30.489 [2024-12-08T05:08:20.275Z] Total : 2330.03 145.63 0.00 0.00 27001.25 2189.50 31933.91 00:09:30.747 05:08:20 -- target/host_management.sh@101 -- # stoptarget 00:09:30.747 05:08:20 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:30.747 05:08:20 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:30.747 05:08:20 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:30.747 05:08:20 -- target/host_management.sh@40 -- # nvmftestfini 00:09:30.747 05:08:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:30.747 05:08:20 -- nvmf/common.sh@116 -- # sync 00:09:30.747 05:08:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:30.747 05:08:20 -- nvmf/common.sh@119 -- # set +e 00:09:30.747 05:08:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:30.747 05:08:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:30.747 rmmod nvme_tcp 00:09:30.747 rmmod nvme_fabrics 00:09:30.747 rmmod nvme_keyring 00:09:30.747 05:08:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:30.747 05:08:20 -- nvmf/common.sh@123 -- # set -e 00:09:30.747 05:08:20 -- nvmf/common.sh@124 -- # return 0 00:09:30.747 05:08:20 -- nvmf/common.sh@477 -- # '[' -n 72097 ']' 00:09:30.747 05:08:20 -- nvmf/common.sh@478 -- # killprocess 72097 00:09:30.747 05:08:20 -- common/autotest_common.sh@936 -- # '[' -z 72097 ']' 00:09:30.747 05:08:20 -- common/autotest_common.sh@940 -- # kill -0 72097 00:09:30.747 05:08:20 -- common/autotest_common.sh@941 -- # uname 00:09:30.747 05:08:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:30.747 05:08:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72097 00:09:30.747 killing process with pid 72097 00:09:30.747 05:08:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:30.747 05:08:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:30.747 05:08:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72097' 00:09:30.747 05:08:20 -- common/autotest_common.sh@955 -- # kill 72097 00:09:30.747 05:08:20 -- common/autotest_common.sh@960 -- # wait 72097 00:09:31.018 [2024-12-08 05:08:20.622629] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:31.018 05:08:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:31.018 05:08:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:31.018 05:08:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:31.018 05:08:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:31.018 05:08:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:31.018 05:08:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.018 05:08:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:31.018 05:08:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.018 05:08:20 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:31.018 00:09:31.018 real 0m5.579s 00:09:31.018 user 0m23.884s 00:09:31.018 sys 0m1.354s 00:09:31.018 ************************************ 00:09:31.018 END TEST nvmf_host_management 00:09:31.018 ************************************ 00:09:31.018 05:08:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:31.018 05:08:20 -- common/autotest_common.sh@10 -- # set +x 00:09:31.018 05:08:20 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:09:31.018 00:09:31.018 real 0m6.339s 00:09:31.018 user 0m24.070s 00:09:31.018 sys 0m1.599s 00:09:31.018 05:08:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:31.018 05:08:20 -- common/autotest_common.sh@10 -- # set +x 00:09:31.018 ************************************ 00:09:31.018 END TEST nvmf_host_management 00:09:31.018 ************************************ 00:09:31.018 05:08:20 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:31.018 05:08:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:31.018 05:08:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:31.018 05:08:20 -- common/autotest_common.sh@10 -- # set +x 00:09:31.018 ************************************ 00:09:31.018 START TEST nvmf_lvol 00:09:31.018 ************************************ 00:09:31.018 05:08:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:31.277 * Looking for test storage... 00:09:31.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:31.277 05:08:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:31.277 05:08:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:31.277 05:08:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:31.277 05:08:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:31.277 05:08:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:31.277 05:08:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:31.277 05:08:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:31.277 05:08:20 -- scripts/common.sh@335 -- # IFS=.-: 00:09:31.277 05:08:20 -- scripts/common.sh@335 -- # read -ra ver1 00:09:31.277 05:08:20 -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.277 05:08:20 -- scripts/common.sh@336 -- # read -ra ver2 00:09:31.277 05:08:20 -- scripts/common.sh@337 -- # local 'op=<' 00:09:31.277 05:08:20 -- scripts/common.sh@339 -- # ver1_l=2 00:09:31.277 05:08:20 -- scripts/common.sh@340 -- # ver2_l=1 00:09:31.277 05:08:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:31.277 05:08:20 -- scripts/common.sh@343 -- # case "$op" in 00:09:31.277 05:08:20 -- scripts/common.sh@344 -- # : 1 00:09:31.277 05:08:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:31.277 05:08:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:31.277 05:08:20 -- scripts/common.sh@364 -- # decimal 1 00:09:31.277 05:08:20 -- scripts/common.sh@352 -- # local d=1 00:09:31.277 05:08:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.277 05:08:20 -- scripts/common.sh@354 -- # echo 1 00:09:31.277 05:08:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:31.277 05:08:20 -- scripts/common.sh@365 -- # decimal 2 00:09:31.277 05:08:20 -- scripts/common.sh@352 -- # local d=2 00:09:31.277 05:08:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.277 05:08:20 -- scripts/common.sh@354 -- # echo 2 00:09:31.277 05:08:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:31.277 05:08:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:31.277 05:08:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:31.277 05:08:20 -- scripts/common.sh@367 -- # return 0 00:09:31.277 05:08:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.277 05:08:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:31.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.277 --rc genhtml_branch_coverage=1 00:09:31.277 --rc genhtml_function_coverage=1 00:09:31.277 --rc genhtml_legend=1 00:09:31.277 --rc geninfo_all_blocks=1 00:09:31.277 --rc geninfo_unexecuted_blocks=1 00:09:31.277 00:09:31.277 ' 00:09:31.277 05:08:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:31.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.277 --rc genhtml_branch_coverage=1 00:09:31.277 --rc genhtml_function_coverage=1 00:09:31.277 --rc genhtml_legend=1 00:09:31.277 --rc geninfo_all_blocks=1 00:09:31.277 --rc geninfo_unexecuted_blocks=1 00:09:31.277 00:09:31.277 ' 00:09:31.277 05:08:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:31.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.277 --rc genhtml_branch_coverage=1 00:09:31.277 --rc genhtml_function_coverage=1 00:09:31.277 --rc genhtml_legend=1 00:09:31.277 --rc geninfo_all_blocks=1 00:09:31.277 --rc geninfo_unexecuted_blocks=1 00:09:31.277 00:09:31.277 ' 00:09:31.277 05:08:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:31.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.277 --rc genhtml_branch_coverage=1 00:09:31.277 --rc genhtml_function_coverage=1 00:09:31.277 --rc genhtml_legend=1 00:09:31.277 --rc geninfo_all_blocks=1 00:09:31.277 --rc geninfo_unexecuted_blocks=1 00:09:31.277 00:09:31.277 ' 00:09:31.277 05:08:20 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:31.277 05:08:20 -- nvmf/common.sh@7 -- # uname -s 00:09:31.277 05:08:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.277 05:08:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.277 05:08:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.277 05:08:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.277 05:08:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.277 05:08:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.277 05:08:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.277 05:08:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.277 05:08:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.277 05:08:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.277 05:08:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:09:31.277 
05:08:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:09:31.277 05:08:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.277 05:08:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.277 05:08:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:31.277 05:08:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:31.277 05:08:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.277 05:08:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.277 05:08:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.277 05:08:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.277 05:08:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.277 05:08:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.277 05:08:20 -- paths/export.sh@5 -- # export PATH 00:09:31.277 05:08:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.277 05:08:20 -- nvmf/common.sh@46 -- # : 0 00:09:31.277 05:08:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:31.277 05:08:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:31.277 05:08:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:31.277 05:08:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.277 05:08:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.277 05:08:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:09:31.277 05:08:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:31.277 05:08:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:31.277 05:08:20 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:31.277 05:08:20 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:31.277 05:08:20 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:31.277 05:08:20 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:31.277 05:08:20 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.277 05:08:20 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:31.278 05:08:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:31.278 05:08:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.278 05:08:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:31.278 05:08:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:31.278 05:08:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:31.278 05:08:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.278 05:08:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:31.278 05:08:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.278 05:08:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:31.278 05:08:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:31.278 05:08:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:31.278 05:08:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:31.278 05:08:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:31.278 05:08:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:31.278 05:08:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.278 05:08:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.278 05:08:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:31.278 05:08:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:31.278 05:08:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:31.278 05:08:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:31.278 05:08:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:31.278 05:08:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.278 05:08:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:31.278 05:08:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:31.278 05:08:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:31.278 05:08:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:31.278 05:08:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:31.278 05:08:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:31.278 Cannot find device "nvmf_tgt_br" 00:09:31.278 05:08:21 -- nvmf/common.sh@154 -- # true 00:09:31.278 05:08:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:31.278 Cannot find device "nvmf_tgt_br2" 00:09:31.278 05:08:21 -- nvmf/common.sh@155 -- # true 00:09:31.278 05:08:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:31.278 05:08:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:31.278 Cannot find device "nvmf_tgt_br" 00:09:31.278 05:08:21 -- nvmf/common.sh@157 -- # true 00:09:31.278 05:08:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:31.278 Cannot find device "nvmf_tgt_br2" 00:09:31.278 05:08:21 -- nvmf/common.sh@158 -- # true 00:09:31.278 05:08:21 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:09:31.536 05:08:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:31.536 05:08:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:31.536 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.536 05:08:21 -- nvmf/common.sh@161 -- # true 00:09:31.536 05:08:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:31.536 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.536 05:08:21 -- nvmf/common.sh@162 -- # true 00:09:31.536 05:08:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:31.536 05:08:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:31.536 05:08:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:31.536 05:08:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:31.536 05:08:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:31.536 05:08:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:31.536 05:08:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:31.536 05:08:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:31.536 05:08:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:31.536 05:08:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:31.536 05:08:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:31.536 05:08:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:31.536 05:08:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:31.536 05:08:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:31.536 05:08:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:31.536 05:08:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:31.536 05:08:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:31.536 05:08:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:31.536 05:08:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:31.536 05:08:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:31.536 05:08:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:31.536 05:08:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:31.536 05:08:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:31.536 05:08:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:31.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:09:31.536 00:09:31.536 --- 10.0.0.2 ping statistics --- 00:09:31.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.536 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:09:31.536 05:08:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:31.536 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:31.536 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:09:31.536 00:09:31.536 --- 10.0.0.3 ping statistics --- 00:09:31.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.536 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:09:31.536 05:08:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:31.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:09:31.794 00:09:31.794 --- 10.0.0.1 ping statistics --- 00:09:31.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.794 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:31.794 05:08:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.794 05:08:21 -- nvmf/common.sh@421 -- # return 0 00:09:31.794 05:08:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:31.794 05:08:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.794 05:08:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:31.794 05:08:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:31.794 05:08:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.794 05:08:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:31.794 05:08:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:31.794 05:08:21 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:31.794 05:08:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:31.794 05:08:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:31.794 05:08:21 -- common/autotest_common.sh@10 -- # set +x 00:09:31.794 05:08:21 -- nvmf/common.sh@469 -- # nvmfpid=72436 00:09:31.794 05:08:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:31.794 05:08:21 -- nvmf/common.sh@470 -- # waitforlisten 72436 00:09:31.794 05:08:21 -- common/autotest_common.sh@829 -- # '[' -z 72436 ']' 00:09:31.794 05:08:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.794 05:08:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:31.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.794 05:08:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.794 05:08:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:31.794 05:08:21 -- common/autotest_common.sh@10 -- # set +x 00:09:31.794 [2024-12-08 05:08:21.396025] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:31.794 [2024-12-08 05:08:21.396136] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.794 [2024-12-08 05:08:21.535214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:31.794 [2024-12-08 05:08:21.576747] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:31.794 [2024-12-08 05:08:21.576898] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.794 [2024-12-08 05:08:21.576912] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
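The veth/namespace plumbing traced above (and repeated later for the lvs_grow suite) condenses into a short sketch; every command below appears verbatim in the trace, with the matching "ip link set ... up" calls omitted for brevity:

    # target runs inside its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    # three veth pairs: one initiator-side, two target-side
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # a bridge ties the host-side peers together; port 4420 is opened for NVMe/TCP
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The ping checks against 10.0.0.2, 10.0.0.3 and (from inside the namespace) 10.0.0.1 then confirm the topology before the nvmf target is started.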
00:09:31.794 [2024-12-08 05:08:21.576921] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.794 [2024-12-08 05:08:21.577081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.794 [2024-12-08 05:08:21.577367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.794 [2024-12-08 05:08:21.577382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.726 05:08:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.726 05:08:22 -- common/autotest_common.sh@862 -- # return 0 00:09:32.726 05:08:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:32.726 05:08:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:32.726 05:08:22 -- common/autotest_common.sh@10 -- # set +x 00:09:32.726 05:08:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.726 05:08:22 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:32.984 [2024-12-08 05:08:22.719556] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.984 05:08:22 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.548 05:08:23 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:33.548 05:08:23 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.807 05:08:23 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:33.807 05:08:23 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:34.064 05:08:23 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:34.630 05:08:24 -- target/nvmf_lvol.sh@29 -- # lvs=bcc538bb-f475-446b-abdd-066811844fb2 00:09:34.630 05:08:24 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bcc538bb-f475-446b-abdd-066811844fb2 lvol 20 00:09:34.888 05:08:24 -- target/nvmf_lvol.sh@32 -- # lvol=af0587a3-5632-4dc4-bc93-5b4502d5b6ae 00:09:34.888 05:08:24 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:35.146 05:08:24 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 af0587a3-5632-4dc4-bc93-5b4502d5b6ae 00:09:35.405 05:08:24 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:35.662 [2024-12-08 05:08:25.289742] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.662 05:08:25 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:35.920 05:08:25 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:35.920 05:08:25 -- target/nvmf_lvol.sh@42 -- # perf_pid=72517 00:09:35.920 05:08:25 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:36.854 05:08:26 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot af0587a3-5632-4dc4-bc93-5b4502d5b6ae MY_SNAPSHOT 
00:09:37.420 05:08:26 -- target/nvmf_lvol.sh@47 -- # snapshot=a98569e4-0059-4636-a0fe-2e32b0192541 00:09:37.420 05:08:26 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize af0587a3-5632-4dc4-bc93-5b4502d5b6ae 30 00:09:37.677 05:08:27 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone a98569e4-0059-4636-a0fe-2e32b0192541 MY_CLONE 00:09:37.935 05:08:27 -- target/nvmf_lvol.sh@49 -- # clone=2f1aae98-6f5e-4e52-a36e-225c0f82d186 00:09:37.935 05:08:27 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 2f1aae98-6f5e-4e52-a36e-225c0f82d186 00:09:38.553 05:08:28 -- target/nvmf_lvol.sh@53 -- # wait 72517 00:09:46.656 Initializing NVMe Controllers 00:09:46.656 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:46.656 Controller IO queue size 128, less than required. 00:09:46.656 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:46.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:46.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:46.656 Initialization complete. Launching workers. 00:09:46.656 ======================================================== 00:09:46.656 Latency(us) 00:09:46.656 Device Information : IOPS MiB/s Average min max 00:09:46.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9370.50 36.60 13661.35 1988.47 71633.45 00:09:46.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9420.40 36.80 13589.94 2136.53 62686.25 00:09:46.656 ======================================================== 00:09:46.656 Total : 18790.89 73.40 13625.55 1988.47 71633.45 00:09:46.656 00:09:46.656 05:08:35 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:46.656 05:08:36 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete af0587a3-5632-4dc4-bc93-5b4502d5b6ae 00:09:46.914 05:08:36 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bcc538bb-f475-446b-abdd-066811844fb2 00:09:47.173 05:08:36 -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:47.173 05:08:36 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:47.173 05:08:36 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:47.173 05:08:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:47.173 05:08:36 -- nvmf/common.sh@116 -- # sync 00:09:47.173 05:08:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:47.173 05:08:36 -- nvmf/common.sh@119 -- # set +e 00:09:47.173 05:08:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:47.173 05:08:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:47.173 rmmod nvme_tcp 00:09:47.173 rmmod nvme_fabrics 00:09:47.173 rmmod nvme_keyring 00:09:47.173 05:08:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:47.173 05:08:36 -- nvmf/common.sh@123 -- # set -e 00:09:47.173 05:08:36 -- nvmf/common.sh@124 -- # return 0 00:09:47.173 05:08:36 -- nvmf/common.sh@477 -- # '[' -n 72436 ']' 00:09:47.173 05:08:36 -- nvmf/common.sh@478 -- # killprocess 72436 00:09:47.173 05:08:36 -- common/autotest_common.sh@936 -- # '[' -z 72436 ']' 00:09:47.173 05:08:36 -- common/autotest_common.sh@940 -- # kill -0 72436 00:09:47.173 05:08:36 -- common/autotest_common.sh@941 -- # uname 00:09:47.173 
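The nvmf_lvol run that completes above follows a fixed RPC sequence; a condensed recap of the calls visible in the trace, where rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path and the captured UUIDs are written as shell variables rather than the literal values from the log:

    rpc.py bdev_malloc_create 64 512                 # Malloc0
    rpc.py bdev_malloc_create 64 512                 # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf writes to the exported namespace, the volume is reshaped:
    snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    rpc.py bdev_lvol_resize "$lvol" 30
    clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
    rpc.py bdev_lvol_inflate "$clone"

After the perf job exits, the subsystem, the lvol and the lvstore are deleted in that order, mirroring the teardown lines above.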
05:08:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:47.173 05:08:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72436 00:09:47.173 05:08:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:47.173 killing process with pid 72436 00:09:47.173 05:08:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:47.173 05:08:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72436' 00:09:47.173 05:08:36 -- common/autotest_common.sh@955 -- # kill 72436 00:09:47.173 05:08:36 -- common/autotest_common.sh@960 -- # wait 72436 00:09:47.431 05:08:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:47.431 05:08:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:47.431 05:08:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:47.431 05:08:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:47.431 05:08:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:47.431 05:08:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.431 05:08:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:47.431 05:08:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.431 05:08:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:47.431 00:09:47.431 real 0m16.320s 00:09:47.431 user 1m7.306s 00:09:47.431 sys 0m4.794s 00:09:47.431 05:08:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:47.431 05:08:37 -- common/autotest_common.sh@10 -- # set +x 00:09:47.431 ************************************ 00:09:47.431 END TEST nvmf_lvol 00:09:47.431 ************************************ 00:09:47.431 05:08:37 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:47.431 05:08:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:47.431 05:08:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:47.431 05:08:37 -- common/autotest_common.sh@10 -- # set +x 00:09:47.431 ************************************ 00:09:47.431 START TEST nvmf_lvs_grow 00:09:47.431 ************************************ 00:09:47.431 05:08:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:47.690 * Looking for test storage... 
00:09:47.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:47.690 05:08:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:47.690 05:08:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:47.690 05:08:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:47.690 05:08:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:47.690 05:08:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:47.690 05:08:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:47.690 05:08:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:47.690 05:08:37 -- scripts/common.sh@335 -- # IFS=.-: 00:09:47.690 05:08:37 -- scripts/common.sh@335 -- # read -ra ver1 00:09:47.690 05:08:37 -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.690 05:08:37 -- scripts/common.sh@336 -- # read -ra ver2 00:09:47.690 05:08:37 -- scripts/common.sh@337 -- # local 'op=<' 00:09:47.690 05:08:37 -- scripts/common.sh@339 -- # ver1_l=2 00:09:47.690 05:08:37 -- scripts/common.sh@340 -- # ver2_l=1 00:09:47.690 05:08:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:47.690 05:08:37 -- scripts/common.sh@343 -- # case "$op" in 00:09:47.690 05:08:37 -- scripts/common.sh@344 -- # : 1 00:09:47.690 05:08:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:47.690 05:08:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:47.690 05:08:37 -- scripts/common.sh@364 -- # decimal 1 00:09:47.690 05:08:37 -- scripts/common.sh@352 -- # local d=1 00:09:47.690 05:08:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.690 05:08:37 -- scripts/common.sh@354 -- # echo 1 00:09:47.690 05:08:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:47.690 05:08:37 -- scripts/common.sh@365 -- # decimal 2 00:09:47.690 05:08:37 -- scripts/common.sh@352 -- # local d=2 00:09:47.690 05:08:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.690 05:08:37 -- scripts/common.sh@354 -- # echo 2 00:09:47.690 05:08:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:47.690 05:08:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:47.690 05:08:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:47.690 05:08:37 -- scripts/common.sh@367 -- # return 0 00:09:47.690 05:08:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.690 05:08:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:47.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.690 --rc genhtml_branch_coverage=1 00:09:47.690 --rc genhtml_function_coverage=1 00:09:47.690 --rc genhtml_legend=1 00:09:47.690 --rc geninfo_all_blocks=1 00:09:47.690 --rc geninfo_unexecuted_blocks=1 00:09:47.690 00:09:47.690 ' 00:09:47.690 05:08:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:47.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.690 --rc genhtml_branch_coverage=1 00:09:47.690 --rc genhtml_function_coverage=1 00:09:47.690 --rc genhtml_legend=1 00:09:47.690 --rc geninfo_all_blocks=1 00:09:47.690 --rc geninfo_unexecuted_blocks=1 00:09:47.690 00:09:47.690 ' 00:09:47.690 05:08:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:47.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.690 --rc genhtml_branch_coverage=1 00:09:47.690 --rc genhtml_function_coverage=1 00:09:47.690 --rc genhtml_legend=1 00:09:47.690 --rc geninfo_all_blocks=1 00:09:47.690 --rc geninfo_unexecuted_blocks=1 00:09:47.690 00:09:47.690 ' 00:09:47.690 
05:08:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:47.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.690 --rc genhtml_branch_coverage=1 00:09:47.690 --rc genhtml_function_coverage=1 00:09:47.690 --rc genhtml_legend=1 00:09:47.690 --rc geninfo_all_blocks=1 00:09:47.690 --rc geninfo_unexecuted_blocks=1 00:09:47.690 00:09:47.690 ' 00:09:47.690 05:08:37 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:47.690 05:08:37 -- nvmf/common.sh@7 -- # uname -s 00:09:47.690 05:08:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.690 05:08:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.690 05:08:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.690 05:08:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.690 05:08:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.690 05:08:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.690 05:08:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.690 05:08:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.690 05:08:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.690 05:08:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.690 05:08:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:09:47.690 05:08:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:09:47.690 05:08:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.690 05:08:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.690 05:08:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:47.690 05:08:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:47.690 05:08:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.690 05:08:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.691 05:08:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.691 05:08:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.691 05:08:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.691 05:08:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.691 05:08:37 -- paths/export.sh@5 -- # export PATH 00:09:47.691 05:08:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.691 05:08:37 -- nvmf/common.sh@46 -- # : 0 00:09:47.691 05:08:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:47.691 05:08:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:47.691 05:08:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:47.691 05:08:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.691 05:08:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.691 05:08:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:47.691 05:08:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:47.691 05:08:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:47.691 05:08:37 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:47.691 05:08:37 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:47.691 05:08:37 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:09:47.691 05:08:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:47.691 05:08:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.691 05:08:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:47.691 05:08:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:47.691 05:08:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:47.691 05:08:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.691 05:08:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:47.691 05:08:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.691 05:08:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:47.691 05:08:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:47.691 05:08:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:47.691 05:08:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:47.691 05:08:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:47.691 05:08:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:47.691 05:08:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.691 05:08:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.691 05:08:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:47.691 05:08:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:47.691 05:08:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:47.691 05:08:37 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:47.691 05:08:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:47.691 05:08:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.691 05:08:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:47.691 05:08:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:47.691 05:08:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:47.691 05:08:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:47.691 05:08:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:47.691 05:08:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:47.691 Cannot find device "nvmf_tgt_br" 00:09:47.691 05:08:37 -- nvmf/common.sh@154 -- # true 00:09:47.691 05:08:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:47.691 Cannot find device "nvmf_tgt_br2" 00:09:47.691 05:08:37 -- nvmf/common.sh@155 -- # true 00:09:47.691 05:08:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:47.691 05:08:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:47.691 Cannot find device "nvmf_tgt_br" 00:09:47.691 05:08:37 -- nvmf/common.sh@157 -- # true 00:09:47.691 05:08:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:47.691 Cannot find device "nvmf_tgt_br2" 00:09:47.691 05:08:37 -- nvmf/common.sh@158 -- # true 00:09:47.691 05:08:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:47.691 05:08:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:47.950 05:08:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:47.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:47.950 05:08:37 -- nvmf/common.sh@161 -- # true 00:09:47.950 05:08:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:47.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:47.950 05:08:37 -- nvmf/common.sh@162 -- # true 00:09:47.950 05:08:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:47.950 05:08:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:47.950 05:08:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:47.950 05:08:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:47.950 05:08:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:47.950 05:08:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:47.950 05:08:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:47.950 05:08:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:47.950 05:08:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:47.950 05:08:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:47.950 05:08:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:47.950 05:08:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:47.950 05:08:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:47.950 05:08:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:47.950 05:08:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:09:47.950 05:08:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:47.950 05:08:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:47.950 05:08:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:47.950 05:08:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:47.950 05:08:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:47.950 05:08:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:47.950 05:08:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:47.950 05:08:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:47.950 05:08:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:47.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:09:47.950 00:09:47.950 --- 10.0.0.2 ping statistics --- 00:09:47.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.950 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:47.950 05:08:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:47.950 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:47.950 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:09:47.950 00:09:47.950 --- 10.0.0.3 ping statistics --- 00:09:47.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.950 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:47.950 05:08:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:47.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:47.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:09:47.950 00:09:47.950 --- 10.0.0.1 ping statistics --- 00:09:47.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.950 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:47.950 05:08:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.950 05:08:37 -- nvmf/common.sh@421 -- # return 0 00:09:47.950 05:08:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:47.950 05:08:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.950 05:08:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:47.950 05:08:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:47.950 05:08:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.950 05:08:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:47.950 05:08:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:47.950 05:08:37 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:09:47.950 05:08:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:47.950 05:08:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:47.950 05:08:37 -- common/autotest_common.sh@10 -- # set +x 00:09:47.950 05:08:37 -- nvmf/common.sh@469 -- # nvmfpid=72848 00:09:47.950 05:08:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:47.950 05:08:37 -- nvmf/common.sh@470 -- # waitforlisten 72848 00:09:47.950 05:08:37 -- common/autotest_common.sh@829 -- # '[' -z 72848 ']' 00:09:47.950 05:08:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.950 05:08:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.950 05:08:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:09:47.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.950 05:08:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.950 05:08:37 -- common/autotest_common.sh@10 -- # set +x 00:09:48.208 [2024-12-08 05:08:37.779404] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:48.208 [2024-12-08 05:08:37.779501] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:48.208 [2024-12-08 05:08:37.921256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.208 [2024-12-08 05:08:37.964441] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:48.208 [2024-12-08 05:08:37.964610] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:48.208 [2024-12-08 05:08:37.964625] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:48.208 [2024-12-08 05:08:37.964635] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:48.208 [2024-12-08 05:08:37.964665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.467 05:08:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:48.467 05:08:38 -- common/autotest_common.sh@862 -- # return 0 00:09:48.467 05:08:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:48.467 05:08:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:48.467 05:08:38 -- common/autotest_common.sh@10 -- # set +x 00:09:48.467 05:08:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.467 05:08:38 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:48.725 [2024-12-08 05:08:38.387168] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.725 05:08:38 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:09:48.725 05:08:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:48.725 05:08:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:48.725 05:08:38 -- common/autotest_common.sh@10 -- # set +x 00:09:48.725 ************************************ 00:09:48.725 START TEST lvs_grow_clean 00:09:48.725 ************************************ 00:09:48.725 05:08:38 -- common/autotest_common.sh@1114 -- # lvs_grow 00:09:48.725 05:08:38 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:48.725 05:08:38 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:48.725 05:08:38 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:48.725 05:08:38 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:48.725 05:08:38 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:48.725 05:08:38 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:48.725 05:08:38 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:48.725 05:08:38 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:48.725 05:08:38 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:48.982 05:08:38 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:48.982 05:08:38 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:49.549 05:08:39 -- target/nvmf_lvs_grow.sh@28 -- # lvs=b574472e-0d51-4771-8cf6-c40083af3b7d 00:09:49.549 05:08:39 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:49.549 05:08:39 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b574472e-0d51-4771-8cf6-c40083af3b7d 00:09:49.808 05:08:39 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:49.808 05:08:39 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:49.808 05:08:39 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b574472e-0d51-4771-8cf6-c40083af3b7d lvol 150 00:09:50.067 05:08:39 -- target/nvmf_lvs_grow.sh@33 -- # lvol=7e52b6c9-3960-4028-9bbb-d736bce84b06 00:09:50.067 05:08:39 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:50.067 05:08:39 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:50.325 [2024-12-08 05:08:40.004756] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:50.325 [2024-12-08 05:08:40.004837] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:50.325 true 00:09:50.325 05:08:40 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b574472e-0d51-4771-8cf6-c40083af3b7d 00:09:50.325 05:08:40 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:50.582 05:08:40 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:50.582 05:08:40 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:51.200 05:08:40 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7e52b6c9-3960-4028-9bbb-d736bce84b06 00:09:51.200 05:08:40 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:51.459 [2024-12-08 05:08:41.237557] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.717 05:08:41 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:51.975 05:08:41 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72928 00:09:51.975 05:08:41 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:51.975 05:08:41 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:51.975 05:08:41 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72928 /var/tmp/bdevperf.sock 00:09:51.975 05:08:41 -- common/autotest_common.sh@829 -- # '[' -z 72928 ']' 00:09:51.975 05:08:41 -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/bdevperf.sock 00:09:51.975 05:08:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:51.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:51.975 05:08:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:51.975 05:08:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:51.975 05:08:41 -- common/autotest_common.sh@10 -- # set +x 00:09:51.975 [2024-12-08 05:08:41.572302] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:51.975 [2024-12-08 05:08:41.572457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72928 ] 00:09:51.975 [2024-12-08 05:08:41.714731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.975 [2024-12-08 05:08:41.753901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.905 05:08:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:52.905 05:08:42 -- common/autotest_common.sh@862 -- # return 0 00:09:52.905 05:08:42 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:53.163 Nvme0n1 00:09:53.163 05:08:42 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:53.422 [ 00:09:53.422 { 00:09:53.422 "name": "Nvme0n1", 00:09:53.422 "aliases": [ 00:09:53.422 "7e52b6c9-3960-4028-9bbb-d736bce84b06" 00:09:53.422 ], 00:09:53.422 "product_name": "NVMe disk", 00:09:53.422 "block_size": 4096, 00:09:53.422 "num_blocks": 38912, 00:09:53.422 "uuid": "7e52b6c9-3960-4028-9bbb-d736bce84b06", 00:09:53.422 "assigned_rate_limits": { 00:09:53.422 "rw_ios_per_sec": 0, 00:09:53.422 "rw_mbytes_per_sec": 0, 00:09:53.422 "r_mbytes_per_sec": 0, 00:09:53.422 "w_mbytes_per_sec": 0 00:09:53.422 }, 00:09:53.422 "claimed": false, 00:09:53.422 "zoned": false, 00:09:53.422 "supported_io_types": { 00:09:53.422 "read": true, 00:09:53.422 "write": true, 00:09:53.422 "unmap": true, 00:09:53.422 "write_zeroes": true, 00:09:53.422 "flush": true, 00:09:53.422 "reset": true, 00:09:53.422 "compare": true, 00:09:53.422 "compare_and_write": true, 00:09:53.422 "abort": true, 00:09:53.422 "nvme_admin": true, 00:09:53.422 "nvme_io": true 00:09:53.422 }, 00:09:53.422 "driver_specific": { 00:09:53.422 "nvme": [ 00:09:53.422 { 00:09:53.422 "trid": { 00:09:53.422 "trtype": "TCP", 00:09:53.422 "adrfam": "IPv4", 00:09:53.422 "traddr": "10.0.0.2", 00:09:53.422 "trsvcid": "4420", 00:09:53.422 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:53.422 }, 00:09:53.422 "ctrlr_data": { 00:09:53.422 "cntlid": 1, 00:09:53.422 "vendor_id": "0x8086", 00:09:53.422 "model_number": "SPDK bdev Controller", 00:09:53.422 "serial_number": "SPDK0", 00:09:53.422 "firmware_revision": "24.01.1", 00:09:53.422 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:53.422 "oacs": { 00:09:53.422 "security": 0, 00:09:53.422 "format": 0, 00:09:53.422 "firmware": 0, 00:09:53.422 "ns_manage": 0 00:09:53.422 }, 00:09:53.422 "multi_ctrlr": true, 00:09:53.422 "ana_reporting": false 00:09:53.422 }, 00:09:53.422 "vs": { 00:09:53.422 "nvme_version": "1.3" 00:09:53.422 }, 
00:09:53.422 "ns_data": { 00:09:53.422 "id": 1, 00:09:53.422 "can_share": true 00:09:53.422 } 00:09:53.422 } 00:09:53.422 ], 00:09:53.422 "mp_policy": "active_passive" 00:09:53.422 } 00:09:53.422 } 00:09:53.422 ] 00:09:53.422 05:08:43 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72952 00:09:53.422 05:08:43 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:53.422 05:08:43 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:53.688 Running I/O for 10 seconds... 00:09:54.623 Latency(us) 00:09:54.623 [2024-12-08T05:08:44.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.623 [2024-12-08T05:08:44.409Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.623 Nvme0n1 : 1.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:54.623 [2024-12-08T05:08:44.409Z] =================================================================================================================== 00:09:54.623 [2024-12-08T05:08:44.409Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:54.623 00:09:55.569 05:08:45 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b574472e-0d51-4771-8cf6-c40083af3b7d 00:09:55.569 [2024-12-08T05:08:45.355Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.569 Nvme0n1 : 2.00 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:09:55.569 [2024-12-08T05:08:45.355Z] =================================================================================================================== 00:09:55.569 [2024-12-08T05:08:45.355Z] Total : 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:09:55.569 00:09:55.826 true 00:09:55.826 05:08:45 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b574472e-0d51-4771-8cf6-c40083af3b7d 00:09:55.826 05:08:45 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:56.392 05:08:45 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:56.392 05:08:45 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:56.392 05:08:45 -- target/nvmf_lvs_grow.sh@65 -- # wait 72952 00:09:56.651 [2024-12-08T05:08:46.437Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.651 Nvme0n1 : 3.00 6688.67 26.13 0.00 0.00 0.00 0.00 0.00 00:09:56.651 [2024-12-08T05:08:46.437Z] =================================================================================================================== 00:09:56.651 [2024-12-08T05:08:46.437Z] Total : 6688.67 26.13 0.00 0.00 0.00 0.00 0.00 00:09:56.651 00:09:57.585 [2024-12-08T05:08:47.371Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.585 Nvme0n1 : 4.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:57.585 [2024-12-08T05:08:47.371Z] =================================================================================================================== 00:09:57.585 [2024-12-08T05:08:47.371Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:57.585 00:09:58.958 [2024-12-08T05:08:48.744Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:58.958 Nvme0n1 : 5.00 6705.60 26.19 0.00 0.00 0.00 0.00 0.00 00:09:58.958 [2024-12-08T05:08:48.744Z] =================================================================================================================== 00:09:58.958 [2024-12-08T05:08:48.744Z] Total : 6705.60 26.19 0.00 0.00 0.00 0.00 0.00 00:09:58.958 
00:09:59.891 [2024-12-08T05:08:49.677Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.891 Nvme0n1 : 6.00 6625.17 25.88 0.00 0.00 0.00 0.00 0.00 00:09:59.891 [2024-12-08T05:08:49.677Z] =================================================================================================================== 00:09:59.891 [2024-12-08T05:08:49.677Z] Total : 6625.17 25.88 0.00 0.00 0.00 0.00 0.00 00:09:59.891 00:10:00.828 [2024-12-08T05:08:50.614Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:00.828 Nvme0n1 : 7.00 6567.71 25.66 0.00 0.00 0.00 0.00 0.00 00:10:00.828 [2024-12-08T05:08:50.614Z] =================================================================================================================== 00:10:00.828 [2024-12-08T05:08:50.614Z] Total : 6567.71 25.66 0.00 0.00 0.00 0.00 0.00 00:10:00.828 00:10:01.762 [2024-12-08T05:08:51.548Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:01.762 Nvme0n1 : 8.00 6524.62 25.49 0.00 0.00 0.00 0.00 0.00 00:10:01.762 [2024-12-08T05:08:51.548Z] =================================================================================================================== 00:10:01.762 [2024-12-08T05:08:51.548Z] Total : 6524.62 25.49 0.00 0.00 0.00 0.00 0.00 00:10:01.762 00:10:02.695 [2024-12-08T05:08:52.481Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:02.695 Nvme0n1 : 9.00 6381.22 24.93 0.00 0.00 0.00 0.00 0.00 00:10:02.695 [2024-12-08T05:08:52.481Z] =================================================================================================================== 00:10:02.695 [2024-12-08T05:08:52.481Z] Total : 6381.22 24.93 0.00 0.00 0.00 0.00 0.00 00:10:02.695 00:10:03.626 [2024-12-08T05:08:53.412Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:03.626 Nvme0n1 : 10.00 6390.80 24.96 0.00 0.00 0.00 0.00 0.00 00:10:03.627 [2024-12-08T05:08:53.413Z] =================================================================================================================== 00:10:03.627 [2024-12-08T05:08:53.413Z] Total : 6390.80 24.96 0.00 0.00 0.00 0.00 0.00 00:10:03.627 00:10:03.627 00:10:03.627 Latency(us) 00:10:03.627 [2024-12-08T05:08:53.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.627 [2024-12-08T05:08:53.413Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:03.627 Nvme0n1 : 10.02 6392.36 24.97 0.00 0.00 20017.58 10485.76 179211.17 00:10:03.627 [2024-12-08T05:08:53.413Z] =================================================================================================================== 00:10:03.627 [2024-12-08T05:08:53.413Z] Total : 6392.36 24.97 0.00 0.00 20017.58 10485.76 179211.17 00:10:03.627 0 00:10:03.627 05:08:53 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72928 00:10:03.627 05:08:53 -- common/autotest_common.sh@936 -- # '[' -z 72928 ']' 00:10:03.627 05:08:53 -- common/autotest_common.sh@940 -- # kill -0 72928 00:10:03.627 05:08:53 -- common/autotest_common.sh@941 -- # uname 00:10:03.627 05:08:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:03.627 05:08:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72928 00:10:03.627 05:08:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:03.627 05:08:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:03.627 05:08:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72928' 
00:10:03.627 killing process with pid 72928 00:10:03.627 05:08:53 -- common/autotest_common.sh@955 -- # kill 72928 00:10:03.627 Received shutdown signal, test time was about 10.000000 seconds 00:10:03.627 00:10:03.627 Latency(us) 00:10:03.627 [2024-12-08T05:08:53.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.627 [2024-12-08T05:08:53.413Z] =================================================================================================================== 00:10:03.627 [2024-12-08T05:08:53.413Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:03.627 05:08:53 -- common/autotest_common.sh@960 -- # wait 72928 00:10:03.885 05:08:53 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:04.142 05:08:53 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:10:04.142 05:08:53 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b574472e-0d51-4771-8cf6-c40083af3b7d 00:10:04.399 05:08:54 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:10:04.399 05:08:54 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:10:04.399 05:08:54 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:04.656 [2024-12-08 05:08:54.392331] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:04.656 05:08:54 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b574472e-0d51-4771-8cf6-c40083af3b7d 00:10:04.656 05:08:54 -- common/autotest_common.sh@650 -- # local es=0 00:10:04.656 05:08:54 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b574472e-0d51-4771-8cf6-c40083af3b7d 00:10:04.656 05:08:54 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:04.656 05:08:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:04.656 05:08:54 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:04.941 05:08:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:04.941 05:08:54 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:04.941 05:08:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:04.941 05:08:54 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:04.941 05:08:54 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:04.941 05:08:54 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b574472e-0d51-4771-8cf6-c40083af3b7d 00:10:04.941 request: 00:10:04.941 { 00:10:04.941 "uuid": "b574472e-0d51-4771-8cf6-c40083af3b7d", 00:10:04.941 "method": "bdev_lvol_get_lvstores", 00:10:04.941 "req_id": 1 00:10:04.941 } 00:10:04.941 Got JSON-RPC error response 00:10:04.941 response: 00:10:04.941 { 00:10:04.941 "code": -19, 00:10:04.941 "message": "No such device" 00:10:04.941 } 00:10:04.941 05:08:54 -- common/autotest_common.sh@653 -- # es=1 00:10:04.941 05:08:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:04.941 05:08:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:04.941 05:08:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:04.941 05:08:54 -- 
target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:05.552 aio_bdev 00:10:05.552 05:08:55 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 7e52b6c9-3960-4028-9bbb-d736bce84b06 00:10:05.552 05:08:55 -- common/autotest_common.sh@897 -- # local bdev_name=7e52b6c9-3960-4028-9bbb-d736bce84b06 00:10:05.552 05:08:55 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:05.552 05:08:55 -- common/autotest_common.sh@899 -- # local i 00:10:05.552 05:08:55 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:05.552 05:08:55 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:05.552 05:08:55 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:05.811 05:08:55 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7e52b6c9-3960-4028-9bbb-d736bce84b06 -t 2000 00:10:05.811 [ 00:10:05.811 { 00:10:05.811 "name": "7e52b6c9-3960-4028-9bbb-d736bce84b06", 00:10:05.811 "aliases": [ 00:10:05.811 "lvs/lvol" 00:10:05.811 ], 00:10:05.811 "product_name": "Logical Volume", 00:10:05.811 "block_size": 4096, 00:10:05.811 "num_blocks": 38912, 00:10:05.811 "uuid": "7e52b6c9-3960-4028-9bbb-d736bce84b06", 00:10:05.811 "assigned_rate_limits": { 00:10:05.811 "rw_ios_per_sec": 0, 00:10:05.811 "rw_mbytes_per_sec": 0, 00:10:05.811 "r_mbytes_per_sec": 0, 00:10:05.811 "w_mbytes_per_sec": 0 00:10:05.811 }, 00:10:05.811 "claimed": false, 00:10:05.811 "zoned": false, 00:10:05.811 "supported_io_types": { 00:10:05.811 "read": true, 00:10:05.812 "write": true, 00:10:05.812 "unmap": true, 00:10:05.812 "write_zeroes": true, 00:10:05.812 "flush": false, 00:10:05.812 "reset": true, 00:10:05.812 "compare": false, 00:10:05.812 "compare_and_write": false, 00:10:05.812 "abort": false, 00:10:05.812 "nvme_admin": false, 00:10:05.812 "nvme_io": false 00:10:05.812 }, 00:10:05.812 "driver_specific": { 00:10:05.812 "lvol": { 00:10:05.812 "lvol_store_uuid": "b574472e-0d51-4771-8cf6-c40083af3b7d", 00:10:05.812 "base_bdev": "aio_bdev", 00:10:05.812 "thin_provision": false, 00:10:05.812 "snapshot": false, 00:10:05.812 "clone": false, 00:10:05.812 "esnap_clone": false 00:10:05.812 } 00:10:05.812 } 00:10:05.812 } 00:10:05.812 ] 00:10:05.812 05:08:55 -- common/autotest_common.sh@905 -- # return 0 00:10:06.069 05:08:55 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b574472e-0d51-4771-8cf6-c40083af3b7d 00:10:06.069 05:08:55 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:10:06.327 05:08:55 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:10:06.327 05:08:55 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b574472e-0d51-4771-8cf6-c40083af3b7d 00:10:06.327 05:08:55 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:10:06.584 05:08:56 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:10:06.584 05:08:56 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7e52b6c9-3960-4028-9bbb-d736bce84b06 00:10:06.842 05:08:56 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b574472e-0d51-4771-8cf6-c40083af3b7d 00:10:07.099 05:08:56 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:07.664 05:08:57 -- 
target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:07.922 ************************************ 00:10:07.922 END TEST lvs_grow_clean 00:10:07.922 ************************************ 00:10:07.922 00:10:07.922 real 0m19.208s 00:10:07.922 user 0m18.201s 00:10:07.922 sys 0m2.672s 00:10:07.922 05:08:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:07.922 05:08:57 -- common/autotest_common.sh@10 -- # set +x 00:10:07.922 05:08:57 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:07.922 05:08:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:07.922 05:08:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:07.922 05:08:57 -- common/autotest_common.sh@10 -- # set +x 00:10:07.922 ************************************ 00:10:07.922 START TEST lvs_grow_dirty 00:10:07.922 ************************************ 00:10:07.922 05:08:57 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:10:07.922 05:08:57 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:07.922 05:08:57 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:07.922 05:08:57 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:07.922 05:08:57 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:07.922 05:08:57 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:07.922 05:08:57 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:07.922 05:08:57 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:07.922 05:08:57 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:07.922 05:08:57 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:08.487 05:08:58 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:08.487 05:08:58 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:08.744 05:08:58 -- target/nvmf_lvs_grow.sh@28 -- # lvs=bfa945c7-73d3-482e-acc9-c170d0018de1 00:10:08.744 05:08:58 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfa945c7-73d3-482e-acc9-c170d0018de1 00:10:08.744 05:08:58 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:09.001 05:08:58 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:09.001 05:08:58 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:09.001 05:08:58 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bfa945c7-73d3-482e-acc9-c170d0018de1 lvol 150 00:10:09.264 05:08:59 -- target/nvmf_lvs_grow.sh@33 -- # lvol=2427fe77-c4d4-44f8-babd-580a3715a7b2 00:10:09.264 05:08:59 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:09.264 05:08:59 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:09.829 [2024-12-08 05:08:59.382111] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:09.830 [2024-12-08 05:08:59.382246] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: 
*NOTICE*: Unsupported bdev event: type 1 00:10:09.830 true 00:10:09.830 05:08:59 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfa945c7-73d3-482e-acc9-c170d0018de1 00:10:09.830 05:08:59 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:10.086 05:08:59 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:10.086 05:08:59 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:10.650 05:09:00 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2427fe77-c4d4-44f8-babd-580a3715a7b2 00:10:10.952 05:09:00 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:11.517 05:09:00 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:11.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:11.774 05:09:01 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73214 00:10:11.774 05:09:01 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:11.774 05:09:01 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:11.774 05:09:01 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73214 /var/tmp/bdevperf.sock 00:10:11.774 05:09:01 -- common/autotest_common.sh@829 -- # '[' -z 73214 ']' 00:10:11.774 05:09:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:11.774 05:09:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:11.774 05:09:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:11.774 05:09:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:11.774 05:09:01 -- common/autotest_common.sh@10 -- # set +x 00:10:11.774 [2024-12-08 05:09:01.405436] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:11.774 [2024-12-08 05:09:01.405578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73214 ] 00:10:11.774 [2024-12-08 05:09:01.546460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.032 [2024-12-08 05:09:01.588785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.032 05:09:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:12.032 05:09:01 -- common/autotest_common.sh@862 -- # return 0 00:10:12.032 05:09:01 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:12.596 Nvme0n1 00:10:12.596 05:09:02 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:12.854 [ 00:10:12.854 { 00:10:12.854 "name": "Nvme0n1", 00:10:12.854 "aliases": [ 00:10:12.854 "2427fe77-c4d4-44f8-babd-580a3715a7b2" 00:10:12.854 ], 00:10:12.854 "product_name": "NVMe disk", 00:10:12.854 "block_size": 4096, 00:10:12.854 "num_blocks": 38912, 00:10:12.854 "uuid": "2427fe77-c4d4-44f8-babd-580a3715a7b2", 00:10:12.854 "assigned_rate_limits": { 00:10:12.854 "rw_ios_per_sec": 0, 00:10:12.854 "rw_mbytes_per_sec": 0, 00:10:12.854 "r_mbytes_per_sec": 0, 00:10:12.854 "w_mbytes_per_sec": 0 00:10:12.854 }, 00:10:12.854 "claimed": false, 00:10:12.854 "zoned": false, 00:10:12.854 "supported_io_types": { 00:10:12.854 "read": true, 00:10:12.854 "write": true, 00:10:12.854 "unmap": true, 00:10:12.855 "write_zeroes": true, 00:10:12.855 "flush": true, 00:10:12.855 "reset": true, 00:10:12.855 "compare": true, 00:10:12.855 "compare_and_write": true, 00:10:12.855 "abort": true, 00:10:12.855 "nvme_admin": true, 00:10:12.855 "nvme_io": true 00:10:12.855 }, 00:10:12.855 "driver_specific": { 00:10:12.855 "nvme": [ 00:10:12.855 { 00:10:12.855 "trid": { 00:10:12.855 "trtype": "TCP", 00:10:12.855 "adrfam": "IPv4", 00:10:12.855 "traddr": "10.0.0.2", 00:10:12.855 "trsvcid": "4420", 00:10:12.855 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:12.855 }, 00:10:12.855 "ctrlr_data": { 00:10:12.855 "cntlid": 1, 00:10:12.855 "vendor_id": "0x8086", 00:10:12.855 "model_number": "SPDK bdev Controller", 00:10:12.855 "serial_number": "SPDK0", 00:10:12.855 "firmware_revision": "24.01.1", 00:10:12.855 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:12.855 "oacs": { 00:10:12.855 "security": 0, 00:10:12.855 "format": 0, 00:10:12.855 "firmware": 0, 00:10:12.855 "ns_manage": 0 00:10:12.855 }, 00:10:12.855 "multi_ctrlr": true, 00:10:12.855 "ana_reporting": false 00:10:12.855 }, 00:10:12.855 "vs": { 00:10:12.855 "nvme_version": "1.3" 00:10:12.855 }, 00:10:12.855 "ns_data": { 00:10:12.855 "id": 1, 00:10:12.855 "can_share": true 00:10:12.855 } 00:10:12.855 } 00:10:12.855 ], 00:10:12.855 "mp_policy": "active_passive" 00:10:12.855 } 00:10:12.855 } 00:10:12.855 ] 00:10:12.855 05:09:02 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73230 00:10:12.855 05:09:02 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:12.855 05:09:02 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:13.113 Running I/O for 10 seconds... 
00:10:14.047 Latency(us) 00:10:14.047 [2024-12-08T05:09:03.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:14.047 [2024-12-08T05:09:03.833Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:14.047 Nvme0n1 : 1.00 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:10:14.047 [2024-12-08T05:09:03.833Z] =================================================================================================================== 00:10:14.047 [2024-12-08T05:09:03.833Z] Total : 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:10:14.047 00:10:14.983 05:09:04 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bfa945c7-73d3-482e-acc9-c170d0018de1 00:10:15.241 [2024-12-08T05:09:05.027Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:15.241 Nvme0n1 : 2.00 5651.50 22.08 0.00 0.00 0.00 0.00 0.00 00:10:15.241 [2024-12-08T05:09:05.027Z] =================================================================================================================== 00:10:15.241 [2024-12-08T05:09:05.027Z] Total : 5651.50 22.08 0.00 0.00 0.00 0.00 0.00 00:10:15.241 00:10:15.241 true 00:10:15.241 05:09:04 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfa945c7-73d3-482e-acc9-c170d0018de1 00:10:15.241 05:09:04 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:15.498 05:09:05 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:15.498 05:09:05 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:15.498 05:09:05 -- target/nvmf_lvs_grow.sh@65 -- # wait 73230 00:10:16.062 [2024-12-08T05:09:05.848Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:16.062 Nvme0n1 : 3.00 5291.33 20.67 0.00 0.00 0.00 0.00 0.00 00:10:16.062 [2024-12-08T05:09:05.848Z] =================================================================================================================== 00:10:16.062 [2024-12-08T05:09:05.848Z] Total : 5291.33 20.67 0.00 0.00 0.00 0.00 0.00 00:10:16.062 00:10:17.441 [2024-12-08T05:09:07.227Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.441 Nvme0n1 : 4.00 5313.25 20.75 0.00 0.00 0.00 0.00 0.00 00:10:17.441 [2024-12-08T05:09:07.227Z] =================================================================================================================== 00:10:17.441 [2024-12-08T05:09:07.227Z] Total : 5313.25 20.75 0.00 0.00 0.00 0.00 0.00 00:10:17.441 00:10:18.006 [2024-12-08T05:09:07.792Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:18.006 Nvme0n1 : 5.00 5212.40 20.36 0.00 0.00 0.00 0.00 0.00 00:10:18.006 [2024-12-08T05:09:07.792Z] =================================================================================================================== 00:10:18.006 [2024-12-08T05:09:07.792Z] Total : 5212.40 20.36 0.00 0.00 0.00 0.00 0.00 00:10:18.006 00:10:19.408 [2024-12-08T05:09:09.195Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:19.409 Nvme0n1 : 6.00 5046.83 19.71 0.00 0.00 0.00 0.00 0.00 00:10:19.409 [2024-12-08T05:09:09.195Z] =================================================================================================================== 00:10:19.409 [2024-12-08T05:09:09.195Z] Total : 5046.83 19.71 0.00 0.00 0.00 0.00 0.00 00:10:19.409 00:10:20.341 [2024-12-08T05:09:10.127Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:10:20.341 Nvme0n1 : 7.00 5018.14 19.60 0.00 0.00 0.00 0.00 0.00 00:10:20.341 [2024-12-08T05:09:10.127Z] =================================================================================================================== 00:10:20.341 [2024-12-08T05:09:10.128Z] Total : 5018.14 19.60 0.00 0.00 0.00 0.00 0.00 00:10:20.342 00:10:21.280 [2024-12-08T05:09:11.066Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:21.280 Nvme0n1 : 8.00 5062.88 19.78 0.00 0.00 0.00 0.00 0.00 00:10:21.280 [2024-12-08T05:09:11.066Z] =================================================================================================================== 00:10:21.280 [2024-12-08T05:09:11.066Z] Total : 5062.88 19.78 0.00 0.00 0.00 0.00 0.00 00:10:21.280 00:10:22.214 [2024-12-08T05:09:12.000Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:22.214 Nvme0n1 : 9.00 5103.89 19.94 0.00 0.00 0.00 0.00 0.00 00:10:22.214 [2024-12-08T05:09:12.000Z] =================================================================================================================== 00:10:22.214 [2024-12-08T05:09:12.000Z] Total : 5103.89 19.94 0.00 0.00 0.00 0.00 0.00 00:10:22.214 00:10:23.146 [2024-12-08T05:09:12.932Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:23.146 Nvme0n1 : 10.00 5102.00 19.93 0.00 0.00 0.00 0.00 0.00 00:10:23.146 [2024-12-08T05:09:12.932Z] =================================================================================================================== 00:10:23.146 [2024-12-08T05:09:12.932Z] Total : 5102.00 19.93 0.00 0.00 0.00 0.00 0.00 00:10:23.146 00:10:23.146 00:10:23.146 Latency(us) 00:10:23.146 [2024-12-08T05:09:12.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:23.146 [2024-12-08T05:09:12.932Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:23.146 Nvme0n1 : 10.01 5111.85 19.97 0.00 0.00 25031.56 5153.51 165865.66 00:10:23.146 [2024-12-08T05:09:12.932Z] =================================================================================================================== 00:10:23.146 [2024-12-08T05:09:12.932Z] Total : 5111.85 19.97 0.00 0.00 25031.56 5153.51 165865.66 00:10:23.146 0 00:10:23.146 05:09:12 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73214 00:10:23.146 05:09:12 -- common/autotest_common.sh@936 -- # '[' -z 73214 ']' 00:10:23.146 05:09:12 -- common/autotest_common.sh@940 -- # kill -0 73214 00:10:23.146 05:09:12 -- common/autotest_common.sh@941 -- # uname 00:10:23.146 05:09:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:23.146 05:09:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73214 00:10:23.146 killing process with pid 73214 00:10:23.146 Received shutdown signal, test time was about 10.000000 seconds 00:10:23.146 00:10:23.146 Latency(us) 00:10:23.146 [2024-12-08T05:09:12.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:23.146 [2024-12-08T05:09:12.932Z] =================================================================================================================== 00:10:23.146 [2024-12-08T05:09:12.932Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:23.146 05:09:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:23.146 05:09:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:23.146 05:09:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73214' 00:10:23.146 05:09:12 -- common/autotest_common.sh@955 
-- # kill 73214 00:10:23.146 05:09:12 -- common/autotest_common.sh@960 -- # wait 73214 00:10:23.416 05:09:13 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:23.981 05:09:13 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfa945c7-73d3-482e-acc9-c170d0018de1 00:10:23.981 05:09:13 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:10:24.239 05:09:13 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:10:24.239 05:09:13 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:10:24.239 05:09:13 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 72848 00:10:24.239 05:09:13 -- target/nvmf_lvs_grow.sh@74 -- # wait 72848 00:10:24.239 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 72848 Killed "${NVMF_APP[@]}" "$@" 00:10:24.239 05:09:13 -- target/nvmf_lvs_grow.sh@74 -- # true 00:10:24.239 05:09:13 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:10:24.239 05:09:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:24.239 05:09:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:24.239 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:10:24.239 05:09:13 -- nvmf/common.sh@469 -- # nvmfpid=73362 00:10:24.239 05:09:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:24.239 05:09:13 -- nvmf/common.sh@470 -- # waitforlisten 73362 00:10:24.239 05:09:13 -- common/autotest_common.sh@829 -- # '[' -z 73362 ']' 00:10:24.239 05:09:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.239 05:09:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:24.239 05:09:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.239 05:09:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:24.239 05:09:13 -- common/autotest_common.sh@10 -- # set +x 00:10:24.239 [2024-12-08 05:09:13.992832] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:24.239 [2024-12-08 05:09:13.992986] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.496 [2024-12-08 05:09:14.138004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.496 [2024-12-08 05:09:14.181183] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:24.496 [2024-12-08 05:09:14.181395] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:24.496 [2024-12-08 05:09:14.181423] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:24.496 [2024-12-08 05:09:14.181439] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:24.496 [2024-12-08 05:09:14.181477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.753 05:09:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:24.753 05:09:14 -- common/autotest_common.sh@862 -- # return 0 00:10:24.753 05:09:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:24.753 05:09:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:24.753 05:09:14 -- common/autotest_common.sh@10 -- # set +x 00:10:24.753 05:09:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:24.753 05:09:14 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:25.011 [2024-12-08 05:09:14.788876] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:25.011 [2024-12-08 05:09:14.789245] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:25.011 [2024-12-08 05:09:14.789465] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:25.268 05:09:14 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:10:25.268 05:09:14 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 2427fe77-c4d4-44f8-babd-580a3715a7b2 00:10:25.268 05:09:14 -- common/autotest_common.sh@897 -- # local bdev_name=2427fe77-c4d4-44f8-babd-580a3715a7b2 00:10:25.268 05:09:14 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:25.268 05:09:14 -- common/autotest_common.sh@899 -- # local i 00:10:25.268 05:09:14 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:25.268 05:09:14 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:25.268 05:09:14 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:25.525 05:09:15 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2427fe77-c4d4-44f8-babd-580a3715a7b2 -t 2000 00:10:25.781 [ 00:10:25.781 { 00:10:25.781 "name": "2427fe77-c4d4-44f8-babd-580a3715a7b2", 00:10:25.781 "aliases": [ 00:10:25.781 "lvs/lvol" 00:10:25.781 ], 00:10:25.781 "product_name": "Logical Volume", 00:10:25.781 "block_size": 4096, 00:10:25.781 "num_blocks": 38912, 00:10:25.781 "uuid": "2427fe77-c4d4-44f8-babd-580a3715a7b2", 00:10:25.781 "assigned_rate_limits": { 00:10:25.781 "rw_ios_per_sec": 0, 00:10:25.781 "rw_mbytes_per_sec": 0, 00:10:25.781 "r_mbytes_per_sec": 0, 00:10:25.781 "w_mbytes_per_sec": 0 00:10:25.781 }, 00:10:25.781 "claimed": false, 00:10:25.781 "zoned": false, 00:10:25.781 "supported_io_types": { 00:10:25.781 "read": true, 00:10:25.781 "write": true, 00:10:25.781 "unmap": true, 00:10:25.781 "write_zeroes": true, 00:10:25.781 "flush": false, 00:10:25.781 "reset": true, 00:10:25.781 "compare": false, 00:10:25.782 "compare_and_write": false, 00:10:25.782 "abort": false, 00:10:25.782 "nvme_admin": false, 00:10:25.782 "nvme_io": false 00:10:25.782 }, 00:10:25.782 "driver_specific": { 00:10:25.782 "lvol": { 00:10:25.782 "lvol_store_uuid": "bfa945c7-73d3-482e-acc9-c170d0018de1", 00:10:25.782 "base_bdev": "aio_bdev", 00:10:25.782 "thin_provision": false, 00:10:25.782 "snapshot": false, 00:10:25.782 "clone": false, 00:10:25.782 "esnap_clone": false 00:10:25.782 } 00:10:25.782 } 00:10:25.782 } 00:10:25.782 ] 00:10:25.782 05:09:15 -- common/autotest_common.sh@905 -- # return 0 00:10:25.782 05:09:15 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
bfa945c7-73d3-482e-acc9-c170d0018de1 00:10:25.782 05:09:15 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:10:26.346 05:09:15 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:10:26.346 05:09:15 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfa945c7-73d3-482e-acc9-c170d0018de1 00:10:26.346 05:09:15 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:10:26.605 05:09:16 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:10:26.605 05:09:16 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:27.172 [2024-12-08 05:09:16.666112] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:27.172 05:09:16 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfa945c7-73d3-482e-acc9-c170d0018de1 00:10:27.172 05:09:16 -- common/autotest_common.sh@650 -- # local es=0 00:10:27.172 05:09:16 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfa945c7-73d3-482e-acc9-c170d0018de1 00:10:27.172 05:09:16 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:27.172 05:09:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:27.172 05:09:16 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:27.172 05:09:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:27.172 05:09:16 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:27.172 05:09:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:27.172 05:09:16 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:27.172 05:09:16 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:27.172 05:09:16 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfa945c7-73d3-482e-acc9-c170d0018de1 00:10:27.430 request: 00:10:27.430 { 00:10:27.430 "uuid": "bfa945c7-73d3-482e-acc9-c170d0018de1", 00:10:27.430 "method": "bdev_lvol_get_lvstores", 00:10:27.430 "req_id": 1 00:10:27.430 } 00:10:27.430 Got JSON-RPC error response 00:10:27.430 response: 00:10:27.430 { 00:10:27.430 "code": -19, 00:10:27.430 "message": "No such device" 00:10:27.430 } 00:10:27.430 05:09:17 -- common/autotest_common.sh@653 -- # es=1 00:10:27.430 05:09:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:27.430 05:09:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:27.430 05:09:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:27.430 05:09:17 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:28.055 aio_bdev 00:10:28.056 05:09:17 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 2427fe77-c4d4-44f8-babd-580a3715a7b2 00:10:28.056 05:09:17 -- common/autotest_common.sh@897 -- # local bdev_name=2427fe77-c4d4-44f8-babd-580a3715a7b2 00:10:28.056 05:09:17 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:28.056 05:09:17 -- common/autotest_common.sh@899 -- # local i 00:10:28.056 05:09:17 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:28.056 05:09:17 -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:28.056 05:09:17 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:28.633 05:09:18 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2427fe77-c4d4-44f8-babd-580a3715a7b2 -t 2000 00:10:28.891 [ 00:10:28.891 { 00:10:28.891 "name": "2427fe77-c4d4-44f8-babd-580a3715a7b2", 00:10:28.891 "aliases": [ 00:10:28.891 "lvs/lvol" 00:10:28.891 ], 00:10:28.891 "product_name": "Logical Volume", 00:10:28.891 "block_size": 4096, 00:10:28.891 "num_blocks": 38912, 00:10:28.891 "uuid": "2427fe77-c4d4-44f8-babd-580a3715a7b2", 00:10:28.891 "assigned_rate_limits": { 00:10:28.891 "rw_ios_per_sec": 0, 00:10:28.891 "rw_mbytes_per_sec": 0, 00:10:28.891 "r_mbytes_per_sec": 0, 00:10:28.891 "w_mbytes_per_sec": 0 00:10:28.891 }, 00:10:28.891 "claimed": false, 00:10:28.891 "zoned": false, 00:10:28.891 "supported_io_types": { 00:10:28.891 "read": true, 00:10:28.891 "write": true, 00:10:28.891 "unmap": true, 00:10:28.891 "write_zeroes": true, 00:10:28.891 "flush": false, 00:10:28.891 "reset": true, 00:10:28.891 "compare": false, 00:10:28.891 "compare_and_write": false, 00:10:28.891 "abort": false, 00:10:28.891 "nvme_admin": false, 00:10:28.891 "nvme_io": false 00:10:28.891 }, 00:10:28.891 "driver_specific": { 00:10:28.891 "lvol": { 00:10:28.891 "lvol_store_uuid": "bfa945c7-73d3-482e-acc9-c170d0018de1", 00:10:28.891 "base_bdev": "aio_bdev", 00:10:28.892 "thin_provision": false, 00:10:28.892 "snapshot": false, 00:10:28.892 "clone": false, 00:10:28.892 "esnap_clone": false 00:10:28.892 } 00:10:28.892 } 00:10:28.892 } 00:10:28.892 ] 00:10:28.892 05:09:18 -- common/autotest_common.sh@905 -- # return 0 00:10:28.892 05:09:18 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:10:28.892 05:09:18 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfa945c7-73d3-482e-acc9-c170d0018de1 00:10:29.476 05:09:19 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:10:29.476 05:09:19 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfa945c7-73d3-482e-acc9-c170d0018de1 00:10:29.476 05:09:19 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:10:30.043 05:09:19 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:10:30.043 05:09:19 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2427fe77-c4d4-44f8-babd-580a3715a7b2 00:10:30.301 05:09:20 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bfa945c7-73d3-482e-acc9-c170d0018de1 00:10:30.880 05:09:20 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:31.443 05:09:20 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:32.010 ************************************ 00:10:32.010 END TEST lvs_grow_dirty 00:10:32.010 ************************************ 00:10:32.010 00:10:32.010 real 0m23.887s 00:10:32.010 user 0m46.801s 00:10:32.010 sys 0m8.215s 00:10:32.010 05:09:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:32.010 05:09:21 -- common/autotest_common.sh@10 -- # set +x 00:10:32.010 05:09:21 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:32.010 05:09:21 -- common/autotest_common.sh@806 -- # type=--id 00:10:32.010 05:09:21 -- 
common/autotest_common.sh@807 -- # id=0 00:10:32.010 05:09:21 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:10:32.010 05:09:21 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:32.010 05:09:21 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:10:32.010 05:09:21 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:10:32.010 05:09:21 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:10:32.010 05:09:21 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:32.010 nvmf_trace.0 00:10:32.273 05:09:21 -- common/autotest_common.sh@821 -- # return 0 00:10:32.273 05:09:21 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:32.273 05:09:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:32.273 05:09:21 -- nvmf/common.sh@116 -- # sync 00:10:32.530 05:09:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:32.530 05:09:22 -- nvmf/common.sh@119 -- # set +e 00:10:32.530 05:09:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:32.530 05:09:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:32.530 rmmod nvme_tcp 00:10:32.530 rmmod nvme_fabrics 00:10:32.530 rmmod nvme_keyring 00:10:32.788 05:09:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:32.788 05:09:22 -- nvmf/common.sh@123 -- # set -e 00:10:32.788 05:09:22 -- nvmf/common.sh@124 -- # return 0 00:10:32.788 05:09:22 -- nvmf/common.sh@477 -- # '[' -n 73362 ']' 00:10:32.788 05:09:22 -- nvmf/common.sh@478 -- # killprocess 73362 00:10:32.788 05:09:22 -- common/autotest_common.sh@936 -- # '[' -z 73362 ']' 00:10:32.788 05:09:22 -- common/autotest_common.sh@940 -- # kill -0 73362 00:10:32.788 05:09:22 -- common/autotest_common.sh@941 -- # uname 00:10:32.788 05:09:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:32.788 05:09:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73362 00:10:32.788 killing process with pid 73362 00:10:32.788 05:09:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:32.788 05:09:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:32.788 05:09:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73362' 00:10:32.788 05:09:22 -- common/autotest_common.sh@955 -- # kill 73362 00:10:32.788 05:09:22 -- common/autotest_common.sh@960 -- # wait 73362 00:10:33.045 05:09:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:33.045 05:09:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:33.045 05:09:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:33.045 05:09:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:33.045 05:09:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:33.045 05:09:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.045 05:09:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:33.045 05:09:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.045 05:09:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:33.045 ************************************ 00:10:33.045 END TEST nvmf_lvs_grow 00:10:33.045 ************************************ 00:10:33.045 00:10:33.045 real 0m45.464s 00:10:33.045 user 1m13.859s 00:10:33.045 sys 0m11.684s 00:10:33.045 05:09:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:33.045 05:09:22 -- common/autotest_common.sh@10 -- # set +x 00:10:33.045 05:09:22 -- nvmf/nvmf.sh@49 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:33.045 05:09:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:33.045 05:09:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:33.045 05:09:22 -- common/autotest_common.sh@10 -- # set +x 00:10:33.303 ************************************ 00:10:33.303 START TEST nvmf_bdev_io_wait 00:10:33.303 ************************************ 00:10:33.303 05:09:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:33.303 * Looking for test storage... 00:10:33.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:33.303 05:09:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:33.303 05:09:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:33.303 05:09:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:33.562 05:09:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:33.562 05:09:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:33.562 05:09:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:33.562 05:09:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:33.562 05:09:23 -- scripts/common.sh@335 -- # IFS=.-: 00:10:33.562 05:09:23 -- scripts/common.sh@335 -- # read -ra ver1 00:10:33.562 05:09:23 -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.562 05:09:23 -- scripts/common.sh@336 -- # read -ra ver2 00:10:33.562 05:09:23 -- scripts/common.sh@337 -- # local 'op=<' 00:10:33.562 05:09:23 -- scripts/common.sh@339 -- # ver1_l=2 00:10:33.562 05:09:23 -- scripts/common.sh@340 -- # ver2_l=1 00:10:33.562 05:09:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:33.562 05:09:23 -- scripts/common.sh@343 -- # case "$op" in 00:10:33.562 05:09:23 -- scripts/common.sh@344 -- # : 1 00:10:33.562 05:09:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:33.562 05:09:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:33.562 05:09:23 -- scripts/common.sh@364 -- # decimal 1 00:10:33.562 05:09:23 -- scripts/common.sh@352 -- # local d=1 00:10:33.562 05:09:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.562 05:09:23 -- scripts/common.sh@354 -- # echo 1 00:10:33.562 05:09:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:33.562 05:09:23 -- scripts/common.sh@365 -- # decimal 2 00:10:33.562 05:09:23 -- scripts/common.sh@352 -- # local d=2 00:10:33.562 05:09:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.562 05:09:23 -- scripts/common.sh@354 -- # echo 2 00:10:33.562 05:09:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:33.562 05:09:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:33.562 05:09:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:33.562 05:09:23 -- scripts/common.sh@367 -- # return 0 00:10:33.562 05:09:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.562 05:09:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:33.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.562 --rc genhtml_branch_coverage=1 00:10:33.562 --rc genhtml_function_coverage=1 00:10:33.562 --rc genhtml_legend=1 00:10:33.562 --rc geninfo_all_blocks=1 00:10:33.562 --rc geninfo_unexecuted_blocks=1 00:10:33.562 00:10:33.562 ' 00:10:33.562 05:09:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:33.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.562 --rc genhtml_branch_coverage=1 00:10:33.562 --rc genhtml_function_coverage=1 00:10:33.562 --rc genhtml_legend=1 00:10:33.562 --rc geninfo_all_blocks=1 00:10:33.562 --rc geninfo_unexecuted_blocks=1 00:10:33.562 00:10:33.562 ' 00:10:33.562 05:09:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:33.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.562 --rc genhtml_branch_coverage=1 00:10:33.562 --rc genhtml_function_coverage=1 00:10:33.562 --rc genhtml_legend=1 00:10:33.562 --rc geninfo_all_blocks=1 00:10:33.562 --rc geninfo_unexecuted_blocks=1 00:10:33.562 00:10:33.562 ' 00:10:33.562 05:09:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:33.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.562 --rc genhtml_branch_coverage=1 00:10:33.562 --rc genhtml_function_coverage=1 00:10:33.562 --rc genhtml_legend=1 00:10:33.562 --rc geninfo_all_blocks=1 00:10:33.562 --rc geninfo_unexecuted_blocks=1 00:10:33.562 00:10:33.562 ' 00:10:33.562 05:09:23 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:33.562 05:09:23 -- nvmf/common.sh@7 -- # uname -s 00:10:33.563 05:09:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.563 05:09:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.563 05:09:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.563 05:09:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.563 05:09:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.563 05:09:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.563 05:09:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.563 05:09:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.563 05:09:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.563 05:09:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.563 05:09:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 
00:10:33.563 05:09:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:10:33.563 05:09:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.563 05:09:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.563 05:09:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:33.563 05:09:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:33.563 05:09:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.563 05:09:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.563 05:09:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.563 05:09:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.563 05:09:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.563 05:09:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.563 05:09:23 -- paths/export.sh@5 -- # export PATH 00:10:33.563 05:09:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.563 05:09:23 -- nvmf/common.sh@46 -- # : 0 00:10:33.563 05:09:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:33.563 05:09:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:33.563 05:09:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:33.563 05:09:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.563 05:09:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.563 05:09:23 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:10:33.563 05:09:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:33.563 05:09:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:33.563 05:09:23 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:33.563 05:09:23 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:33.563 05:09:23 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:33.563 05:09:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:33.563 05:09:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.563 05:09:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:33.563 05:09:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:33.563 05:09:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:33.563 05:09:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.563 05:09:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:33.563 05:09:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.563 05:09:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:33.563 05:09:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:33.563 05:09:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:33.563 05:09:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:33.563 05:09:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:33.563 05:09:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:33.563 05:09:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:33.563 05:09:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:33.563 05:09:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:33.563 05:09:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:33.563 05:09:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:33.563 05:09:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:33.563 05:09:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:33.563 05:09:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:33.563 05:09:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:33.563 05:09:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:33.563 05:09:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:33.563 05:09:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:33.563 05:09:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:33.563 05:09:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:33.563 Cannot find device "nvmf_tgt_br" 00:10:33.563 05:09:23 -- nvmf/common.sh@154 -- # true 00:10:33.563 05:09:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:33.563 Cannot find device "nvmf_tgt_br2" 00:10:33.563 05:09:23 -- nvmf/common.sh@155 -- # true 00:10:33.563 05:09:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:33.563 05:09:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:33.563 Cannot find device "nvmf_tgt_br" 00:10:33.563 05:09:23 -- nvmf/common.sh@157 -- # true 00:10:33.563 05:09:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:33.563 Cannot find device "nvmf_tgt_br2" 00:10:33.563 05:09:23 -- nvmf/common.sh@158 -- # true 00:10:33.563 05:09:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:33.834 05:09:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:33.834 05:09:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:33.834 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:33.834 05:09:23 -- nvmf/common.sh@161 -- # true 00:10:33.834 05:09:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:33.834 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:33.834 05:09:23 -- nvmf/common.sh@162 -- # true 00:10:33.834 05:09:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:33.834 05:09:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:33.834 05:09:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:33.834 05:09:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:34.097 05:09:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:34.098 05:09:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:34.098 05:09:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:34.098 05:09:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:34.098 05:09:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:34.098 05:09:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:34.098 05:09:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:34.098 05:09:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:34.098 05:09:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:34.098 05:09:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:34.098 05:09:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:34.098 05:09:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:34.098 05:09:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:34.098 05:09:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:34.098 05:09:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:34.098 05:09:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:34.098 05:09:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:34.356 05:09:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:34.356 05:09:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:34.356 05:09:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:34.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:34.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:10:34.356 00:10:34.356 --- 10.0.0.2 ping statistics --- 00:10:34.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.356 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:10:34.356 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:34.356 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:10:34.356 00:10:34.356 --- 10.0.0.3 ping statistics --- 00:10:34.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.356 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:10:34.356 05:09:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:34.356 05:09:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:34.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:34.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:10:34.356 00:10:34.356 --- 10.0.0.1 ping statistics --- 00:10:34.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.356 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:34.356 05:09:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:34.356 05:09:23 -- nvmf/common.sh@421 -- # return 0 00:10:34.356 05:09:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:34.356 05:09:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:34.356 05:09:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:34.356 05:09:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:34.356 05:09:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:34.356 05:09:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:34.356 05:09:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:34.356 05:09:24 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:34.356 05:09:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:34.356 05:09:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:34.356 05:09:24 -- common/autotest_common.sh@10 -- # set +x 00:10:34.356 05:09:24 -- nvmf/common.sh@469 -- # nvmfpid=73717 00:10:34.356 05:09:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:34.356 05:09:24 -- nvmf/common.sh@470 -- # waitforlisten 73717 00:10:34.356 05:09:24 -- common/autotest_common.sh@829 -- # '[' -z 73717 ']' 00:10:34.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.356 05:09:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.356 05:09:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:34.357 05:09:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.357 05:09:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:34.357 05:09:24 -- common/autotest_common.sh@10 -- # set +x 00:10:34.357 [2024-12-08 05:09:24.140052] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:34.614 [2024-12-08 05:09:24.140162] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.614 [2024-12-08 05:09:24.307318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:34.614 [2024-12-08 05:09:24.355995] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:34.614 [2024-12-08 05:09:24.356197] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:34.614 [2024-12-08 05:09:24.356224] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:34.614 [2024-12-08 05:09:24.356242] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:34.614 [2024-12-08 05:09:24.356797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.614 [2024-12-08 05:09:24.356902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:34.614 [2024-12-08 05:09:24.365875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:34.614 [2024-12-08 05:09:24.366382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.872 05:09:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:34.872 05:09:24 -- common/autotest_common.sh@862 -- # return 0 00:10:34.872 05:09:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:34.872 05:09:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:34.872 05:09:24 -- common/autotest_common.sh@10 -- # set +x 00:10:35.130 05:09:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.130 05:09:24 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:35.130 05:09:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.130 05:09:24 -- common/autotest_common.sh@10 -- # set +x 00:10:35.130 05:09:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.130 05:09:24 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:35.130 05:09:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.130 05:09:24 -- common/autotest_common.sh@10 -- # set +x 00:10:35.130 05:09:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.130 05:09:24 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:35.130 05:09:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.130 05:09:24 -- common/autotest_common.sh@10 -- # set +x 00:10:35.130 [2024-12-08 05:09:24.733207] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:35.130 05:09:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.130 05:09:24 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:35.130 05:09:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.130 05:09:24 -- common/autotest_common.sh@10 -- # set +x 00:10:35.130 Malloc0 00:10:35.130 05:09:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.130 05:09:24 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:35.130 05:09:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.130 05:09:24 -- common/autotest_common.sh@10 -- # set +x 00:10:35.130 05:09:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.130 05:09:24 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:35.130 05:09:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.130 05:09:24 -- common/autotest_common.sh@10 -- # set +x 00:10:35.130 05:09:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.130 05:09:24 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.130 05:09:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.130 05:09:24 -- common/autotest_common.sh@10 -- # set +x 00:10:35.130 [2024-12-08 05:09:24.798643] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.130 05:09:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.130 05:09:24 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=73750 00:10:35.130 05:09:24 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:35.130 05:09:24 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:35.130 05:09:24 -- target/bdev_io_wait.sh@30 -- # READ_PID=73752 00:10:35.130 05:09:24 -- nvmf/common.sh@520 -- # config=() 00:10:35.130 05:09:24 -- nvmf/common.sh@520 -- # local subsystem config 00:10:35.130 05:09:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:35.130 05:09:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:35.130 { 00:10:35.130 "params": { 00:10:35.130 "name": "Nvme$subsystem", 00:10:35.130 "trtype": "$TEST_TRANSPORT", 00:10:35.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:35.130 "adrfam": "ipv4", 00:10:35.130 "trsvcid": "$NVMF_PORT", 00:10:35.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:35.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:35.130 "hdgst": ${hdgst:-false}, 00:10:35.130 "ddgst": ${ddgst:-false} 00:10:35.130 }, 00:10:35.130 "method": "bdev_nvme_attach_controller" 00:10:35.130 } 00:10:35.130 EOF 00:10:35.130 )") 00:10:35.130 05:09:24 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:35.130 05:09:24 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=73754 00:10:35.130 05:09:24 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:35.130 05:09:24 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=73757 00:10:35.130 05:09:24 -- nvmf/common.sh@542 -- # cat 00:10:35.130 05:09:24 -- target/bdev_io_wait.sh@35 -- # sync 00:10:35.130 05:09:24 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:35.130 05:09:24 -- nvmf/common.sh@520 -- # config=() 00:10:35.130 05:09:24 -- nvmf/common.sh@520 -- # local subsystem config 00:10:35.130 05:09:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:35.130 05:09:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:35.130 { 00:10:35.130 "params": { 00:10:35.130 "name": "Nvme$subsystem", 00:10:35.130 "trtype": "$TEST_TRANSPORT", 00:10:35.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:35.130 "adrfam": "ipv4", 00:10:35.130 "trsvcid": "$NVMF_PORT", 00:10:35.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:35.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:35.130 "hdgst": ${hdgst:-false}, 00:10:35.130 "ddgst": ${ddgst:-false} 00:10:35.130 }, 00:10:35.130 "method": "bdev_nvme_attach_controller" 00:10:35.130 } 00:10:35.130 EOF 00:10:35.130 )") 00:10:35.130 05:09:24 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:35.130 05:09:24 -- nvmf/common.sh@520 -- # config=() 00:10:35.130 05:09:24 -- nvmf/common.sh@520 -- # local subsystem config 00:10:35.130 05:09:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:35.130 05:09:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:35.130 { 00:10:35.130 "params": { 00:10:35.130 "name": "Nvme$subsystem", 00:10:35.130 "trtype": "$TEST_TRANSPORT", 00:10:35.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:35.131 "adrfam": "ipv4", 00:10:35.131 "trsvcid": "$NVMF_PORT", 00:10:35.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:35.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:35.131 "hdgst": ${hdgst:-false}, 00:10:35.131 "ddgst": ${ddgst:-false} 00:10:35.131 }, 00:10:35.131 "method": "bdev_nvme_attach_controller" 00:10:35.131 } 00:10:35.131 
EOF 00:10:35.131 )") 00:10:35.131 05:09:24 -- nvmf/common.sh@542 -- # cat 00:10:35.131 05:09:24 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:35.131 05:09:24 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:35.131 05:09:24 -- nvmf/common.sh@520 -- # config=() 00:10:35.131 05:09:24 -- nvmf/common.sh@520 -- # local subsystem config 00:10:35.131 05:09:24 -- nvmf/common.sh@542 -- # cat 00:10:35.131 05:09:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:35.131 05:09:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:35.131 { 00:10:35.131 "params": { 00:10:35.131 "name": "Nvme$subsystem", 00:10:35.131 "trtype": "$TEST_TRANSPORT", 00:10:35.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:35.131 "adrfam": "ipv4", 00:10:35.131 "trsvcid": "$NVMF_PORT", 00:10:35.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:35.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:35.131 "hdgst": ${hdgst:-false}, 00:10:35.131 "ddgst": ${ddgst:-false} 00:10:35.131 }, 00:10:35.131 "method": "bdev_nvme_attach_controller" 00:10:35.131 } 00:10:35.131 EOF 00:10:35.131 )") 00:10:35.131 05:09:24 -- nvmf/common.sh@544 -- # jq . 00:10:35.131 05:09:24 -- nvmf/common.sh@544 -- # jq . 00:10:35.131 05:09:24 -- nvmf/common.sh@542 -- # cat 00:10:35.131 05:09:24 -- nvmf/common.sh@545 -- # IFS=, 00:10:35.131 05:09:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:35.131 "params": { 00:10:35.131 "name": "Nvme1", 00:10:35.131 "trtype": "tcp", 00:10:35.131 "traddr": "10.0.0.2", 00:10:35.131 "adrfam": "ipv4", 00:10:35.131 "trsvcid": "4420", 00:10:35.131 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:35.131 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:35.131 "hdgst": false, 00:10:35.131 "ddgst": false 00:10:35.131 }, 00:10:35.131 "method": "bdev_nvme_attach_controller" 00:10:35.131 }' 00:10:35.131 05:09:24 -- nvmf/common.sh@544 -- # jq . 00:10:35.131 05:09:24 -- nvmf/common.sh@545 -- # IFS=, 00:10:35.131 05:09:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:35.131 "params": { 00:10:35.131 "name": "Nvme1", 00:10:35.131 "trtype": "tcp", 00:10:35.131 "traddr": "10.0.0.2", 00:10:35.131 "adrfam": "ipv4", 00:10:35.131 "trsvcid": "4420", 00:10:35.131 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:35.131 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:35.131 "hdgst": false, 00:10:35.131 "ddgst": false 00:10:35.131 }, 00:10:35.131 "method": "bdev_nvme_attach_controller" 00:10:35.131 }' 00:10:35.131 05:09:24 -- nvmf/common.sh@545 -- # IFS=, 00:10:35.131 05:09:24 -- nvmf/common.sh@544 -- # jq . 
00:10:35.131 05:09:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:35.131 "params": { 00:10:35.131 "name": "Nvme1", 00:10:35.131 "trtype": "tcp", 00:10:35.131 "traddr": "10.0.0.2", 00:10:35.131 "adrfam": "ipv4", 00:10:35.131 "trsvcid": "4420", 00:10:35.131 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:35.131 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:35.131 "hdgst": false, 00:10:35.131 "ddgst": false 00:10:35.131 }, 00:10:35.131 "method": "bdev_nvme_attach_controller" 00:10:35.131 }' 00:10:35.131 05:09:24 -- nvmf/common.sh@545 -- # IFS=, 00:10:35.131 05:09:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:35.131 "params": { 00:10:35.131 "name": "Nvme1", 00:10:35.131 "trtype": "tcp", 00:10:35.131 "traddr": "10.0.0.2", 00:10:35.131 "adrfam": "ipv4", 00:10:35.131 "trsvcid": "4420", 00:10:35.131 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:35.131 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:35.131 "hdgst": false, 00:10:35.131 "ddgst": false 00:10:35.131 }, 00:10:35.131 "method": "bdev_nvme_attach_controller" 00:10:35.131 }' 00:10:35.131 [2024-12-08 05:09:24.885264] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:35.131 [2024-12-08 05:09:24.885387] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:35.131 [2024-12-08 05:09:24.913293] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:35.389 [2024-12-08 05:09:24.914193] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:35.389 [2024-12-08 05:09:24.923374] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:35.389 [2024-12-08 05:09:24.923498] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:35.389 [2024-12-08 05:09:24.926265] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:35.389 [2024-12-08 05:09:24.926386] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:35.389 05:09:24 -- target/bdev_io_wait.sh@37 -- # wait 73750 00:10:35.389 [2024-12-08 05:09:25.079130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.389 [2024-12-08 05:09:25.105163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:10:35.389 [2024-12-08 05:09:25.120030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.389 [2024-12-08 05:09:25.146047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:35.389 [2024-12-08 05:09:25.172052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.647 [2024-12-08 05:09:25.214462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.647 [2024-12-08 05:09:25.231037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:35.647 [2024-12-08 05:09:25.238974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:35.647 Running I/O for 1 seconds... 00:10:35.647 Running I/O for 1 seconds... 00:10:35.647 Running I/O for 1 seconds... 00:10:35.647 Running I/O for 1 seconds... 00:10:36.579 00:10:36.579 Latency(us) 00:10:36.579 [2024-12-08T05:09:26.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.579 [2024-12-08T05:09:26.365Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:36.579 Nvme1n1 : 1.01 8297.74 32.41 0.00 0.00 15350.23 7357.91 24188.74 00:10:36.579 [2024-12-08T05:09:26.365Z] =================================================================================================================== 00:10:36.579 [2024-12-08T05:09:26.365Z] Total : 8297.74 32.41 0.00 0.00 15350.23 7357.91 24188.74 00:10:36.579 00:10:36.579 Latency(us) 00:10:36.579 [2024-12-08T05:09:26.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.579 [2024-12-08T05:09:26.365Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:36.579 Nvme1n1 : 1.02 5554.90 21.70 0.00 0.00 22837.64 8757.99 42657.98 00:10:36.579 [2024-12-08T05:09:26.365Z] =================================================================================================================== 00:10:36.579 [2024-12-08T05:09:26.365Z] Total : 5554.90 21.70 0.00 0.00 22837.64 8757.99 42657.98 00:10:36.836 00:10:36.836 Latency(us) 00:10:36.836 [2024-12-08T05:09:26.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.836 [2024-12-08T05:09:26.622Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:36.836 Nvme1n1 : 1.02 6463.77 25.25 0.00 0.00 19709.61 4587.52 39083.29 00:10:36.836 [2024-12-08T05:09:26.622Z] =================================================================================================================== 00:10:36.836 [2024-12-08T05:09:26.622Z] Total : 6463.77 25.25 0.00 0.00 19709.61 4587.52 39083.29 00:10:36.836 00:10:36.836 Latency(us) 00:10:36.836 [2024-12-08T05:09:26.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.836 [2024-12-08T05:09:26.622Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:36.836 Nvme1n1 : 1.00 120121.54 469.22 0.00 0.00 1061.90 431.94 24784.52 00:10:36.836 [2024-12-08T05:09:26.622Z] 
=================================================================================================================== 00:10:36.836 [2024-12-08T05:09:26.622Z] Total : 120121.54 469.22 0.00 0.00 1061.90 431.94 24784.52 00:10:37.094 05:09:26 -- target/bdev_io_wait.sh@38 -- # wait 73752 00:10:37.094 05:09:26 -- target/bdev_io_wait.sh@39 -- # wait 73754 00:10:37.094 05:09:26 -- target/bdev_io_wait.sh@40 -- # wait 73757 00:10:37.094 05:09:26 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.094 05:09:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.094 05:09:26 -- common/autotest_common.sh@10 -- # set +x 00:10:37.094 05:09:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.094 05:09:26 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:37.094 05:09:26 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:37.094 05:09:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:37.094 05:09:26 -- nvmf/common.sh@116 -- # sync 00:10:37.352 05:09:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:37.352 05:09:26 -- nvmf/common.sh@119 -- # set +e 00:10:37.352 05:09:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:37.352 05:09:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:37.352 rmmod nvme_tcp 00:10:37.352 rmmod nvme_fabrics 00:10:37.352 rmmod nvme_keyring 00:10:37.610 05:09:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:37.610 05:09:27 -- nvmf/common.sh@123 -- # set -e 00:10:37.610 05:09:27 -- nvmf/common.sh@124 -- # return 0 00:10:37.610 05:09:27 -- nvmf/common.sh@477 -- # '[' -n 73717 ']' 00:10:37.610 05:09:27 -- nvmf/common.sh@478 -- # killprocess 73717 00:10:37.610 05:09:27 -- common/autotest_common.sh@936 -- # '[' -z 73717 ']' 00:10:37.610 05:09:27 -- common/autotest_common.sh@940 -- # kill -0 73717 00:10:37.610 05:09:27 -- common/autotest_common.sh@941 -- # uname 00:10:37.610 05:09:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:37.610 05:09:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73717 00:10:37.610 killing process with pid 73717 00:10:37.610 05:09:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:37.610 05:09:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:37.610 05:09:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73717' 00:10:37.610 05:09:27 -- common/autotest_common.sh@955 -- # kill 73717 00:10:37.610 05:09:27 -- common/autotest_common.sh@960 -- # wait 73717 00:10:37.898 05:09:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:37.898 05:09:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:37.898 05:09:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:37.898 05:09:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:37.898 05:09:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:37.898 05:09:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.898 05:09:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:37.898 05:09:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.898 05:09:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:37.898 00:10:37.898 real 0m4.755s 00:10:37.898 user 0m14.912s 00:10:37.898 sys 0m2.416s 00:10:37.898 05:09:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:37.898 ************************************ 00:10:37.898 END TEST nvmf_bdev_io_wait 00:10:37.898 ************************************ 00:10:37.898 
05:09:27 -- common/autotest_common.sh@10 -- # set +x 00:10:38.157 05:09:27 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:38.157 05:09:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:38.157 05:09:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:38.157 05:09:27 -- common/autotest_common.sh@10 -- # set +x 00:10:38.157 ************************************ 00:10:38.157 START TEST nvmf_queue_depth 00:10:38.157 ************************************ 00:10:38.157 05:09:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:38.445 * Looking for test storage... 00:10:38.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:38.445 05:09:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:38.445 05:09:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:38.445 05:09:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:38.710 05:09:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:38.710 05:09:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:38.710 05:09:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:38.710 05:09:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:38.710 05:09:28 -- scripts/common.sh@335 -- # IFS=.-: 00:10:38.710 05:09:28 -- scripts/common.sh@335 -- # read -ra ver1 00:10:38.710 05:09:28 -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.710 05:09:28 -- scripts/common.sh@336 -- # read -ra ver2 00:10:38.710 05:09:28 -- scripts/common.sh@337 -- # local 'op=<' 00:10:38.710 05:09:28 -- scripts/common.sh@339 -- # ver1_l=2 00:10:38.710 05:09:28 -- scripts/common.sh@340 -- # ver2_l=1 00:10:38.710 05:09:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:38.710 05:09:28 -- scripts/common.sh@343 -- # case "$op" in 00:10:38.710 05:09:28 -- scripts/common.sh@344 -- # : 1 00:10:38.710 05:09:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:38.710 05:09:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.710 05:09:28 -- scripts/common.sh@364 -- # decimal 1 00:10:38.710 05:09:28 -- scripts/common.sh@352 -- # local d=1 00:10:38.710 05:09:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.710 05:09:28 -- scripts/common.sh@354 -- # echo 1 00:10:38.710 05:09:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:38.710 05:09:28 -- scripts/common.sh@365 -- # decimal 2 00:10:38.710 05:09:28 -- scripts/common.sh@352 -- # local d=2 00:10:38.710 05:09:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.710 05:09:28 -- scripts/common.sh@354 -- # echo 2 00:10:38.710 05:09:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:38.710 05:09:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:38.710 05:09:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:38.710 05:09:28 -- scripts/common.sh@367 -- # return 0 00:10:38.710 05:09:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.710 05:09:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:38.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.710 --rc genhtml_branch_coverage=1 00:10:38.710 --rc genhtml_function_coverage=1 00:10:38.710 --rc genhtml_legend=1 00:10:38.710 --rc geninfo_all_blocks=1 00:10:38.710 --rc geninfo_unexecuted_blocks=1 00:10:38.710 00:10:38.710 ' 00:10:38.710 05:09:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:38.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.710 --rc genhtml_branch_coverage=1 00:10:38.710 --rc genhtml_function_coverage=1 00:10:38.710 --rc genhtml_legend=1 00:10:38.710 --rc geninfo_all_blocks=1 00:10:38.710 --rc geninfo_unexecuted_blocks=1 00:10:38.710 00:10:38.710 ' 00:10:38.710 05:09:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:38.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.710 --rc genhtml_branch_coverage=1 00:10:38.710 --rc genhtml_function_coverage=1 00:10:38.710 --rc genhtml_legend=1 00:10:38.710 --rc geninfo_all_blocks=1 00:10:38.710 --rc geninfo_unexecuted_blocks=1 00:10:38.710 00:10:38.710 ' 00:10:38.710 05:09:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:38.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.710 --rc genhtml_branch_coverage=1 00:10:38.710 --rc genhtml_function_coverage=1 00:10:38.710 --rc genhtml_legend=1 00:10:38.710 --rc geninfo_all_blocks=1 00:10:38.710 --rc geninfo_unexecuted_blocks=1 00:10:38.710 00:10:38.710 ' 00:10:38.710 05:09:28 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:38.710 05:09:28 -- nvmf/common.sh@7 -- # uname -s 00:10:38.710 05:09:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.710 05:09:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.710 05:09:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.710 05:09:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.710 05:09:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.710 05:09:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.710 05:09:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.710 05:09:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.710 05:09:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.710 05:09:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.710 05:09:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 
00:10:38.710 05:09:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:10:38.711 05:09:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.711 05:09:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.711 05:09:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:38.711 05:09:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:38.711 05:09:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.711 05:09:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.711 05:09:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.711 05:09:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.711 05:09:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.711 05:09:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.711 05:09:28 -- paths/export.sh@5 -- # export PATH 00:10:38.711 05:09:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.711 05:09:28 -- nvmf/common.sh@46 -- # : 0 00:10:38.711 05:09:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:38.711 05:09:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:38.711 05:09:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:38.711 05:09:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.711 05:09:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.711 05:09:28 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:10:38.711 05:09:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:38.711 05:09:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:38.711 05:09:28 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:38.711 05:09:28 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:38.711 05:09:28 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:38.711 05:09:28 -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:38.711 05:09:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:38.711 05:09:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.711 05:09:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:38.711 05:09:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:38.711 05:09:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:38.711 05:09:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.711 05:09:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:38.711 05:09:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.711 05:09:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:38.711 05:09:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:38.711 05:09:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:38.711 05:09:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:38.711 05:09:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:38.711 05:09:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:38.711 05:09:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.711 05:09:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:38.711 05:09:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:38.711 05:09:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:38.711 05:09:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:38.711 05:09:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:38.711 05:09:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:38.711 05:09:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.711 05:09:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:38.711 05:09:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:38.711 05:09:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:38.711 05:09:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:38.711 05:09:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:38.711 05:09:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:38.711 Cannot find device "nvmf_tgt_br" 00:10:38.711 05:09:28 -- nvmf/common.sh@154 -- # true 00:10:38.711 05:09:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:38.711 Cannot find device "nvmf_tgt_br2" 00:10:38.711 05:09:28 -- nvmf/common.sh@155 -- # true 00:10:38.711 05:09:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:38.711 05:09:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:38.711 Cannot find device "nvmf_tgt_br" 00:10:38.711 05:09:28 -- nvmf/common.sh@157 -- # true 00:10:38.711 05:09:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:38.711 Cannot find device "nvmf_tgt_br2" 00:10:38.711 05:09:28 -- nvmf/common.sh@158 -- # true 00:10:38.711 05:09:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:38.969 05:09:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:38.969 05:09:28 -- 
nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:38.969 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:38.969 05:09:28 -- nvmf/common.sh@161 -- # true 00:10:38.969 05:09:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:38.969 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:38.969 05:09:28 -- nvmf/common.sh@162 -- # true 00:10:38.969 05:09:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:38.969 05:09:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:38.969 05:09:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:39.226 05:09:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:39.226 05:09:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:39.484 05:09:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:39.484 05:09:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:39.484 05:09:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:39.484 05:09:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:39.484 05:09:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:39.484 05:09:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:39.484 05:09:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:39.484 05:09:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:39.484 05:09:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:39.484 05:09:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:39.484 05:09:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:39.484 05:09:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:39.484 05:09:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:39.484 05:09:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:39.812 05:09:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:39.812 05:09:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:39.812 05:09:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:39.812 05:09:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:39.812 05:09:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:39.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:39.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:10:39.812 00:10:39.812 --- 10.0.0.2 ping statistics --- 00:10:39.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.812 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:10:39.812 05:09:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:39.812 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:39.812 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:10:39.812 00:10:39.812 --- 10.0.0.3 ping statistics --- 00:10:39.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.812 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:10:39.812 05:09:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:39.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:39.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:10:39.812 00:10:39.812 --- 10.0.0.1 ping statistics --- 00:10:39.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.812 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:39.812 05:09:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.812 05:09:29 -- nvmf/common.sh@421 -- # return 0 00:10:39.812 05:09:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:39.812 05:09:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.812 05:09:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:39.812 05:09:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:39.812 05:09:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.812 05:09:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:39.812 05:09:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:39.812 05:09:29 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:39.812 05:09:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:39.812 05:09:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:39.812 05:09:29 -- common/autotest_common.sh@10 -- # set +x 00:10:39.812 05:09:29 -- nvmf/common.sh@469 -- # nvmfpid=73990 00:10:39.812 05:09:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:39.812 05:09:29 -- nvmf/common.sh@470 -- # waitforlisten 73990 00:10:39.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.812 05:09:29 -- common/autotest_common.sh@829 -- # '[' -z 73990 ']' 00:10:39.812 05:09:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.812 05:09:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:39.812 05:09:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.812 05:09:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:39.812 05:09:29 -- common/autotest_common.sh@10 -- # set +x 00:10:40.073 [2024-12-08 05:09:29.616532] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:40.073 [2024-12-08 05:09:29.623914] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.073 [2024-12-08 05:09:29.803976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.332 [2024-12-08 05:09:29.860789] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:40.332 [2024-12-08 05:09:29.862987] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.332 [2024-12-08 05:09:29.863017] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:40.332 [2024-12-08 05:09:29.863031] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:40.332 [2024-12-08 05:09:29.863087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.332 05:09:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:40.332 05:09:30 -- common/autotest_common.sh@862 -- # return 0 00:10:40.332 05:09:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:40.332 05:09:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:40.332 05:09:30 -- common/autotest_common.sh@10 -- # set +x 00:10:40.591 05:09:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.591 05:09:30 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:40.591 05:09:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.591 05:09:30 -- common/autotest_common.sh@10 -- # set +x 00:10:40.591 [2024-12-08 05:09:30.256411] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.591 05:09:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.591 05:09:30 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:40.591 05:09:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.591 05:09:30 -- common/autotest_common.sh@10 -- # set +x 00:10:40.591 Malloc0 00:10:40.591 05:09:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.591 05:09:30 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:40.591 05:09:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.591 05:09:30 -- common/autotest_common.sh@10 -- # set +x 00:10:40.591 05:09:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.591 05:09:30 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:40.591 05:09:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.591 05:09:30 -- common/autotest_common.sh@10 -- # set +x 00:10:40.591 05:09:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.591 05:09:30 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.591 05:09:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.591 05:09:30 -- common/autotest_common.sh@10 -- # set +x 00:10:40.591 [2024-12-08 05:09:30.360216] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:40.591 05:09:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.591 05:09:30 -- target/queue_depth.sh@30 -- # bdevperf_pid=74020 00:10:40.591 05:09:30 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:40.591 05:09:30 -- target/queue_depth.sh@33 -- # waitforlisten 74020 /var/tmp/bdevperf.sock 00:10:40.591 05:09:30 -- common/autotest_common.sh@829 -- # '[' -z 74020 ']' 00:10:40.591 05:09:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:40.591 05:09:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:40.591 05:09:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:10:40.591 05:09:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:40.591 05:09:30 -- common/autotest_common.sh@10 -- # set +x 00:10:40.591 05:09:30 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:40.850 [2024-12-08 05:09:30.475431] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:40.850 [2024-12-08 05:09:30.475569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74020 ] 00:10:41.108 [2024-12-08 05:09:30.676129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.108 [2024-12-08 05:09:30.733151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.366 05:09:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:41.366 05:09:31 -- common/autotest_common.sh@862 -- # return 0 00:10:41.366 05:09:31 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:41.366 05:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.366 05:09:31 -- common/autotest_common.sh@10 -- # set +x 00:10:41.366 NVMe0n1 00:10:41.366 05:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.367 05:09:31 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:41.625 Running I/O for 10 seconds... 00:10:51.762 00:10:51.762 Latency(us) 00:10:51.762 [2024-12-08T05:09:41.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.762 [2024-12-08T05:09:41.548Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:51.762 Verification LBA range: start 0x0 length 0x4000 00:10:51.762 NVMe0n1 : 10.09 10458.72 40.85 0.00 0.00 97391.02 19303.33 127735.62 00:10:51.762 [2024-12-08T05:09:41.548Z] =================================================================================================================== 00:10:51.762 [2024-12-08T05:09:41.548Z] Total : 10458.72 40.85 0.00 0.00 97391.02 19303.33 127735.62 00:10:51.762 0 00:10:51.762 05:09:41 -- target/queue_depth.sh@39 -- # killprocess 74020 00:10:51.762 05:09:41 -- common/autotest_common.sh@936 -- # '[' -z 74020 ']' 00:10:51.762 05:09:41 -- common/autotest_common.sh@940 -- # kill -0 74020 00:10:51.762 05:09:41 -- common/autotest_common.sh@941 -- # uname 00:10:51.762 05:09:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:51.762 05:09:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74020 00:10:51.762 killing process with pid 74020 00:10:51.762 Received shutdown signal, test time was about 10.000000 seconds 00:10:51.762 00:10:51.762 Latency(us) 00:10:51.762 [2024-12-08T05:09:41.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.762 [2024-12-08T05:09:41.548Z] =================================================================================================================== 00:10:51.762 [2024-12-08T05:09:41.548Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:51.762 05:09:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:51.762 05:09:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:51.762 05:09:41 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 74020' 00:10:51.762 05:09:41 -- common/autotest_common.sh@955 -- # kill 74020 00:10:51.762 05:09:41 -- common/autotest_common.sh@960 -- # wait 74020 00:10:52.026 05:09:41 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:52.026 05:09:41 -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:52.026 05:09:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:52.026 05:09:41 -- nvmf/common.sh@116 -- # sync 00:10:52.283 05:09:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:52.283 rmmod nvme_tcp 00:10:52.283 05:09:42 -- nvmf/common.sh@119 -- # set +e 00:10:52.283 05:09:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:52.283 05:09:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:52.542 rmmod nvme_fabrics 00:10:52.542 rmmod nvme_keyring 00:10:52.542 05:09:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:52.542 05:09:42 -- nvmf/common.sh@123 -- # set -e 00:10:52.542 05:09:42 -- nvmf/common.sh@124 -- # return 0 00:10:52.542 05:09:42 -- nvmf/common.sh@477 -- # '[' -n 73990 ']' 00:10:52.542 05:09:42 -- nvmf/common.sh@478 -- # killprocess 73990 00:10:52.542 05:09:42 -- common/autotest_common.sh@936 -- # '[' -z 73990 ']' 00:10:52.542 05:09:42 -- common/autotest_common.sh@940 -- # kill -0 73990 00:10:52.542 05:09:42 -- common/autotest_common.sh@941 -- # uname 00:10:52.542 05:09:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:52.542 05:09:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73990 00:10:52.542 05:09:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:52.542 killing process with pid 73990 00:10:52.542 05:09:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:52.542 05:09:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73990' 00:10:52.542 05:09:42 -- common/autotest_common.sh@955 -- # kill 73990 00:10:52.542 05:09:42 -- common/autotest_common.sh@960 -- # wait 73990 00:10:52.800 05:09:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:52.800 05:09:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:52.800 05:09:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:52.800 05:09:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:52.800 05:09:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:52.800 05:09:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.800 05:09:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:52.800 05:09:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.058 05:09:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:53.058 ************************************ 00:10:53.058 END TEST nvmf_queue_depth 00:10:53.058 ************************************ 00:10:53.058 00:10:53.058 real 0m14.793s 00:10:53.058 user 0m22.227s 00:10:53.058 sys 0m2.500s 00:10:53.058 05:09:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:53.058 05:09:42 -- common/autotest_common.sh@10 -- # set +x 00:10:53.315 05:09:42 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:53.315 05:09:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:53.315 05:09:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:53.315 05:09:42 -- common/autotest_common.sh@10 -- # set +x 00:10:53.315 ************************************ 00:10:53.315 START TEST nvmf_multipath 00:10:53.315 
************************************ 00:10:53.315 05:09:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:53.572 * Looking for test storage... 00:10:53.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:53.572 05:09:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:53.572 05:09:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:53.572 05:09:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:53.572 05:09:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:53.572 05:09:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:53.572 05:09:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:53.572 05:09:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:53.572 05:09:43 -- scripts/common.sh@335 -- # IFS=.-: 00:10:53.572 05:09:43 -- scripts/common.sh@335 -- # read -ra ver1 00:10:53.572 05:09:43 -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.572 05:09:43 -- scripts/common.sh@336 -- # read -ra ver2 00:10:53.572 05:09:43 -- scripts/common.sh@337 -- # local 'op=<' 00:10:53.572 05:09:43 -- scripts/common.sh@339 -- # ver1_l=2 00:10:53.572 05:09:43 -- scripts/common.sh@340 -- # ver2_l=1 00:10:53.572 05:09:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:53.572 05:09:43 -- scripts/common.sh@343 -- # case "$op" in 00:10:53.572 05:09:43 -- scripts/common.sh@344 -- # : 1 00:10:53.572 05:09:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:53.572 05:09:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:53.830 05:09:43 -- scripts/common.sh@364 -- # decimal 1 00:10:53.830 05:09:43 -- scripts/common.sh@352 -- # local d=1 00:10:53.830 05:09:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.830 05:09:43 -- scripts/common.sh@354 -- # echo 1 00:10:53.830 05:09:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:53.830 05:09:43 -- scripts/common.sh@365 -- # decimal 2 00:10:53.830 05:09:43 -- scripts/common.sh@352 -- # local d=2 00:10:53.830 05:09:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.830 05:09:43 -- scripts/common.sh@354 -- # echo 2 00:10:53.830 05:09:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:53.830 05:09:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:53.830 05:09:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:53.831 05:09:43 -- scripts/common.sh@367 -- # return 0 00:10:53.831 05:09:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.831 05:09:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:53.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.831 --rc genhtml_branch_coverage=1 00:10:53.831 --rc genhtml_function_coverage=1 00:10:53.831 --rc genhtml_legend=1 00:10:53.831 --rc geninfo_all_blocks=1 00:10:53.831 --rc geninfo_unexecuted_blocks=1 00:10:53.831 00:10:53.831 ' 00:10:53.831 05:09:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:53.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.831 --rc genhtml_branch_coverage=1 00:10:53.831 --rc genhtml_function_coverage=1 00:10:53.831 --rc genhtml_legend=1 00:10:53.831 --rc geninfo_all_blocks=1 00:10:53.831 --rc geninfo_unexecuted_blocks=1 00:10:53.831 00:10:53.831 ' 00:10:53.831 05:09:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:53.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.831 --rc 
genhtml_branch_coverage=1 00:10:53.831 --rc genhtml_function_coverage=1 00:10:53.831 --rc genhtml_legend=1 00:10:53.831 --rc geninfo_all_blocks=1 00:10:53.831 --rc geninfo_unexecuted_blocks=1 00:10:53.831 00:10:53.831 ' 00:10:53.831 05:09:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:53.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.831 --rc genhtml_branch_coverage=1 00:10:53.831 --rc genhtml_function_coverage=1 00:10:53.831 --rc genhtml_legend=1 00:10:53.831 --rc geninfo_all_blocks=1 00:10:53.831 --rc geninfo_unexecuted_blocks=1 00:10:53.831 00:10:53.831 ' 00:10:53.831 05:09:43 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:53.831 05:09:43 -- nvmf/common.sh@7 -- # uname -s 00:10:53.831 05:09:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.831 05:09:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.831 05:09:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.831 05:09:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.831 05:09:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.831 05:09:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.831 05:09:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.831 05:09:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.831 05:09:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.831 05:09:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.831 05:09:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:10:53.831 05:09:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:10:53.831 05:09:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.831 05:09:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.831 05:09:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:53.831 05:09:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:53.831 05:09:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.831 05:09:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.831 05:09:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.831 05:09:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.831 05:09:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.831 05:09:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.831 05:09:43 -- paths/export.sh@5 -- # export PATH 00:10:53.831 05:09:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.831 05:09:43 -- nvmf/common.sh@46 -- # : 0 00:10:53.831 05:09:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:53.831 05:09:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:53.831 05:09:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:53.831 05:09:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.831 05:09:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.831 05:09:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:53.831 05:09:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:53.831 05:09:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:53.831 05:09:43 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:53.831 05:09:43 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:53.831 05:09:43 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:53.831 05:09:43 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:53.831 05:09:43 -- target/multipath.sh@43 -- # nvmftestinit 00:10:53.831 05:09:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:53.831 05:09:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.831 05:09:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:53.831 05:09:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:53.831 05:09:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:53.831 05:09:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.831 05:09:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:53.831 05:09:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.831 05:09:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:53.831 05:09:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:53.831 05:09:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:53.831 05:09:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:53.831 05:09:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:53.831 05:09:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:53.831 05:09:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.831 05:09:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:53.831 05:09:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:53.831 05:09:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:53.831 05:09:43 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:53.831 05:09:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:53.831 05:09:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:53.831 05:09:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:53.831 05:09:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:53.831 05:09:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:53.831 05:09:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:53.831 05:09:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:53.831 05:09:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:53.831 05:09:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:53.831 Cannot find device "nvmf_tgt_br" 00:10:53.831 05:09:43 -- nvmf/common.sh@154 -- # true 00:10:53.831 05:09:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:53.831 Cannot find device "nvmf_tgt_br2" 00:10:53.831 05:09:43 -- nvmf/common.sh@155 -- # true 00:10:53.831 05:09:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:53.831 05:09:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:53.831 Cannot find device "nvmf_tgt_br" 00:10:53.831 05:09:43 -- nvmf/common.sh@157 -- # true 00:10:53.831 05:09:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:53.831 Cannot find device "nvmf_tgt_br2" 00:10:53.831 05:09:43 -- nvmf/common.sh@158 -- # true 00:10:53.831 05:09:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:54.090 05:09:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:54.090 05:09:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:54.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:54.090 05:09:43 -- nvmf/common.sh@161 -- # true 00:10:54.090 05:09:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:54.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:54.090 05:09:43 -- nvmf/common.sh@162 -- # true 00:10:54.090 05:09:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:54.090 05:09:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:54.090 05:09:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:54.090 05:09:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:54.348 05:09:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:54.348 05:09:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:54.348 05:09:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:54.348 05:09:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:54.348 05:09:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:54.348 05:09:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:54.348 05:09:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:54.348 05:09:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:54.348 05:09:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:54.348 05:09:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:10:54.610 05:09:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:54.610 05:09:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:54.610 05:09:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:54.610 05:09:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:54.610 05:09:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:54.610 05:09:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:54.610 05:09:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:54.610 05:09:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:54.610 05:09:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:54.610 05:09:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:54.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:54.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:10:54.610 00:10:54.610 --- 10.0.0.2 ping statistics --- 00:10:54.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.610 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:10:54.610 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:54.610 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:10:54.610 00:10:54.610 --- 10.0.0.3 ping statistics --- 00:10:54.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.610 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:10:54.610 05:09:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:54.610 05:09:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:54.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:54.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:10:54.610 00:10:54.610 --- 10.0.0.1 ping statistics --- 00:10:54.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.610 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:54.610 05:09:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:54.610 05:09:44 -- nvmf/common.sh@421 -- # return 0 00:10:54.610 05:09:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:54.610 05:09:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:54.610 05:09:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:54.610 05:09:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:54.610 05:09:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:54.610 05:09:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:54.610 05:09:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:54.870 05:09:44 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:10:54.870 05:09:44 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:54.870 05:09:44 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:54.870 05:09:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:54.870 05:09:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:54.870 05:09:44 -- common/autotest_common.sh@10 -- # set +x 00:10:54.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
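The "Cannot find device" and "Cannot open network namespace" lines further up are only nvmf_veth_init's harmless cleanup pass running before the topology exists; the ip commands after them build it from scratch, and the single pings above confirm that both target addresses and the initiator address are reachable. The same topology, condensed into a standalone sketch (one target interface shown instead of the harness's two):

# Target side lives in its own namespace; initiator side stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addressing: 10.0.0.1 for the initiator, 10.0.0.2 for the target.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# Bring everything up and bridge the two branch ends together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Allow NVMe/TCP traffic in and verify reachability in both directions.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1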
00:10:54.870 05:09:44 -- nvmf/common.sh@469 -- # nvmfpid=74360 00:10:54.870 05:09:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:54.870 05:09:44 -- nvmf/common.sh@470 -- # waitforlisten 74360 00:10:54.870 05:09:44 -- common/autotest_common.sh@829 -- # '[' -z 74360 ']' 00:10:54.870 05:09:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.870 05:09:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:54.870 05:09:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.870 05:09:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:54.870 05:09:44 -- common/autotest_common.sh@10 -- # set +x 00:10:54.870 [2024-12-08 05:09:44.516814] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:54.870 [2024-12-08 05:09:44.516948] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.128 [2024-12-08 05:09:44.674452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.128 [2024-12-08 05:09:44.724635] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:55.128 [2024-12-08 05:09:44.738696] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.128 [2024-12-08 05:09:44.743166] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.128 [2024-12-08 05:09:44.743226] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
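For the multipath case the target is started on four cores (-m 0xF) and, in the steps that follow, exposes a single malloc namespace through an ANA-reporting subsystem (nvmf_create_subsystem ... -r) with listeners on both 10.0.0.2 and 10.0.0.3; the initiator then connects to both portals so the kernel assembles one multipath block device with two paths. A condensed sketch of that setup, with the commands taken from the trace below (the hostnqn/hostid values are the ones nvme gen-hostnqn produced earlier in this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Target side: ANA-reporting subsystem with one namespace and two TCP portals.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem $nqn -a -s SPDKISFASTANDAWESOME -r
$rpc nvmf_subsystem_add_ns $nqn Malloc0
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4420

# Initiator side: connect through both portals; the kernel merges them into a
# single namespace (nvme0n1) with per-path devices nvme0c0n1 and nvme0c1n1.
hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32
hostid=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32
nvme connect --hostnqn=$hostnqn --hostid=$hostid -t tcp -n $nqn -a 10.0.0.2 -s 4420 -g -G
nvme connect --hostnqn=$hostnqn --hostid=$hostid -t tcp -n $nqn -a 10.0.0.3 -s 4420 -g -G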
00:10:55.128 [2024-12-08 05:09:44.743341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.128 [2024-12-08 05:09:44.746855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.128 [2024-12-08 05:09:44.755618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.128 [2024-12-08 05:09:44.763071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.386 05:09:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:55.386 05:09:44 -- common/autotest_common.sh@862 -- # return 0 00:10:55.386 05:09:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:55.386 05:09:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:55.386 05:09:44 -- common/autotest_common.sh@10 -- # set +x 00:10:55.386 05:09:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.386 05:09:45 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:55.951 [2024-12-08 05:09:45.618026] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.951 05:09:45 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:56.517 Malloc0 00:10:56.517 05:09:46 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:57.115 05:09:46 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:57.680 05:09:47 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:58.244 [2024-12-08 05:09:47.888886] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:58.244 05:09:47 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:58.809 [2024-12-08 05:09:48.379570] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:58.809 05:09:48 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 --hostid=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:10:59.066 05:09:48 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 --hostid=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:59.324 05:09:48 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:59.324 05:09:48 -- common/autotest_common.sh@1187 -- # local i=0 00:10:59.324 05:09:48 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:59.324 05:09:48 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:59.324 05:09:48 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:01.303 05:09:50 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:01.303 05:09:50 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:01.304 05:09:50 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:01.304 05:09:50 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:01.304 05:09:50 -- 
common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:01.304 05:09:50 -- common/autotest_common.sh@1197 -- # return 0 00:11:01.304 05:09:50 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:01.304 05:09:50 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:01.304 05:09:50 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:11:01.304 05:09:50 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:01.304 05:09:50 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:01.304 05:09:50 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:01.304 05:09:50 -- target/multipath.sh@38 -- # return 0 00:11:01.304 05:09:50 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:01.304 05:09:50 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:01.304 05:09:50 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:01.304 05:09:50 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:01.304 05:09:50 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:01.304 05:09:50 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:11:01.304 05:09:50 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:01.304 05:09:50 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:01.304 05:09:50 -- target/multipath.sh@22 -- # local timeout=20 00:11:01.304 05:09:50 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:01.304 05:09:50 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:01.304 05:09:50 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:01.304 05:09:50 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:01.304 05:09:50 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:01.304 05:09:50 -- target/multipath.sh@22 -- # local timeout=20 00:11:01.304 05:09:50 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:01.304 05:09:50 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:01.304 05:09:50 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:01.304 05:09:50 -- target/multipath.sh@85 -- # echo numa 00:11:01.304 05:09:50 -- target/multipath.sh@88 -- # fio_pid=74463 00:11:01.304 05:09:50 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:01.304 05:09:50 -- target/multipath.sh@90 -- # sleep 1 00:11:01.304 [global] 00:11:01.304 thread=1 00:11:01.304 invalidate=1 00:11:01.304 rw=randrw 00:11:01.304 time_based=1 00:11:01.304 runtime=6 00:11:01.304 ioengine=libaio 00:11:01.304 direct=1 00:11:01.304 bs=4096 00:11:01.304 iodepth=128 00:11:01.304 norandommap=0 00:11:01.304 numjobs=1 00:11:01.304 00:11:01.304 verify_dump=1 00:11:01.304 verify_backlog=512 00:11:01.304 verify_state_save=0 00:11:01.304 do_verify=1 00:11:01.304 verify=crc32c-intel 00:11:01.304 [job0] 00:11:01.304 filename=/dev/nvme0n1 00:11:01.304 Could not set queue depth (nvme0n1) 00:11:01.561 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.561 fio-3.35 00:11:01.561 Starting 1 thread 00:11:02.494 05:09:51 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:02.753 05:09:52 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:03.319 05:09:52 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:03.319 05:09:52 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:03.319 05:09:52 -- target/multipath.sh@22 -- # local timeout=20 00:11:03.319 05:09:52 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:03.319 05:09:52 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:03.319 05:09:52 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:03.319 05:09:52 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:03.319 05:09:52 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:03.319 05:09:52 -- target/multipath.sh@22 -- # local timeout=20 00:11:03.320 05:09:52 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:03.320 05:09:52 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:03.320 05:09:52 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:03.320 05:09:52 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:03.889 05:09:53 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:04.455 05:09:54 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:04.455 05:09:54 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:04.455 05:09:54 -- target/multipath.sh@22 -- # local timeout=20 00:11:04.455 05:09:54 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:04.455 05:09:54 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:11:04.455 05:09:54 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:04.455 05:09:54 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:04.455 05:09:54 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:04.455 05:09:54 -- target/multipath.sh@22 -- # local timeout=20 00:11:04.455 05:09:54 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:04.455 05:09:54 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:04.455 05:09:54 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:04.455 05:09:54 -- target/multipath.sh@104 -- # wait 74463 00:11:07.733 00:11:07.733 job0: (groupid=0, jobs=1): err= 0: pid=74488: Sun Dec 8 05:09:57 2024 00:11:07.733 read: IOPS=7104, BW=27.8MiB/s (29.1MB/s)(167MiB/6008msec) 00:11:07.733 slat (usec): min=4, max=37439, avg=79.97, stdev=657.45 00:11:07.733 clat (usec): min=1115, max=54512, avg=12256.59, stdev=5559.47 00:11:07.733 lat (usec): min=1132, max=61925, avg=12336.55, stdev=5588.90 00:11:07.733 clat percentiles (usec): 00:11:07.733 | 1.00th=[ 4490], 5.00th=[ 6194], 10.00th=[ 7963], 20.00th=[ 9241], 00:11:07.733 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10814], 60.00th=[11207], 00:11:07.733 | 70.00th=[11863], 80.00th=[14877], 90.00th=[18220], 95.00th=[23987], 00:11:07.733 | 99.00th=[35390], 99.50th=[39584], 99.90th=[47973], 99.95th=[48497], 00:11:07.733 | 99.99th=[54264] 00:11:07.733 bw ( KiB/s): min= 9560, max=22472, per=53.65%, avg=15248.00, stdev=4152.55, samples=11 00:11:07.734 iops : min= 2390, max= 5618, avg=3811.91, stdev=1038.21, samples=11 00:11:07.734 write: IOPS=4090, BW=16.0MiB/s (16.8MB/s)(90.5MiB/5663msec); 0 zone resets 00:11:07.734 slat (usec): min=14, max=24029, avg=86.09, stdev=464.24 00:11:07.734 clat (usec): min=959, max=54085, avg=10342.90, stdev=5344.75 00:11:07.734 lat (usec): min=999, max=58061, avg=10428.99, stdev=5360.33 00:11:07.734 clat percentiles (usec): 00:11:07.734 | 1.00th=[ 3654], 5.00th=[ 4555], 10.00th=[ 5473], 20.00th=[ 7504], 00:11:07.734 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:11:07.734 | 70.00th=[10290], 80.00th=[11076], 90.00th=[15664], 95.00th=[21365], 00:11:07.734 | 99.00th=[33162], 99.50th=[35390], 99.90th=[47973], 99.95th=[47973], 00:11:07.734 | 99.99th=[54264] 00:11:07.734 bw ( KiB/s): min= 9896, max=22016, per=93.03%, avg=15224.91, stdev=3887.45, samples=11 00:11:07.734 iops : min= 2474, max= 5504, avg=3806.09, stdev=971.98, samples=11 00:11:07.734 lat (usec) : 1000=0.01% 00:11:07.734 lat (msec) : 2=0.13%, 4=1.00%, 10=42.94%, 20=48.50%, 50=7.40% 00:11:07.734 lat (msec) : 100=0.02% 00:11:07.734 cpu : usr=4.99%, sys=21.11%, ctx=3036, majf=0, minf=54 00:11:07.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:11:07.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.734 issued rwts: total=42685,23167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.734 00:11:07.734 Run status group 0 (all jobs): 00:11:07.734 READ: bw=27.8MiB/s (29.1MB/s), 27.8MiB/s-27.8MiB/s (29.1MB/s-29.1MB/s), io=167MiB (175MB), run=6008-6008msec 00:11:07.734 WRITE: bw=16.0MiB/s (16.8MB/s), 16.0MiB/s-16.0MiB/s (16.8MB/s-16.8MB/s), io=90.5MiB (94.9MB), run=5663-5663msec 00:11:07.734 
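check_ana_state, called before and after each listener-state change, simply polls the per-path ana_state attribute that the kernel exposes under /sys/block until it reads the expected value. Reconstructed from the fragments visible in the trace (the timeout handling is inferred from the local timeout=20 and is illustrative, not the exact upstream implementation):

# Poll /sys/block/<path>/ana_state until it reports the expected ANA state.
check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state

    while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
        if (( timeout-- == 0 )); then
            echo "timed out waiting for $path to reach $ana_state" >&2
            return 1
        fi
        sleep 1
    done
}

# Example: after the 10.0.0.2 listener is made inaccessible, the first path follows.
check_ana_state nvme0c0n1 inaccessible
check_ana_state nvme0c1n1 non-optimized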
00:11:07.734 Disk stats (read/write): 00:11:07.734 nvme0n1: ios=41943/22830, merge=0/0, ticks=463850/205071, in_queue=668921, util=97.39% 00:11:07.734 05:09:57 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:11:07.991 05:09:57 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:08.555 05:09:58 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:08.555 05:09:58 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:08.555 05:09:58 -- target/multipath.sh@22 -- # local timeout=20 00:11:08.555 05:09:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:08.555 05:09:58 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:08.555 05:09:58 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:08.555 05:09:58 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:08.555 05:09:58 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:08.555 05:09:58 -- target/multipath.sh@22 -- # local timeout=20 00:11:08.555 05:09:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:08.555 05:09:58 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:08.555 05:09:58 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:08.555 05:09:58 -- target/multipath.sh@113 -- # echo round-robin 00:11:08.555 05:09:58 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:08.555 05:09:58 -- target/multipath.sh@116 -- # fio_pid=74575 00:11:08.555 05:09:58 -- target/multipath.sh@118 -- # sleep 1 00:11:08.555 [global] 00:11:08.555 thread=1 00:11:08.555 invalidate=1 00:11:08.555 rw=randrw 00:11:08.555 time_based=1 00:11:08.555 runtime=6 00:11:08.555 ioengine=libaio 00:11:08.555 direct=1 00:11:08.555 bs=4096 00:11:08.555 iodepth=128 00:11:08.556 norandommap=0 00:11:08.556 numjobs=1 00:11:08.556 00:11:08.556 verify_dump=1 00:11:08.556 verify_backlog=512 00:11:08.556 verify_state_save=0 00:11:08.556 do_verify=1 00:11:08.556 verify=crc32c-intel 00:11:08.556 [job0] 00:11:08.556 filename=/dev/nvme0n1 00:11:08.556 Could not set queue depth (nvme0n1) 00:11:08.812 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:08.812 fio-3.35 00:11:08.812 Starting 1 thread 00:11:09.743 05:09:59 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:10.001 05:09:59 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:10.259 05:09:59 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:10.259 05:09:59 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:10.259 05:09:59 -- target/multipath.sh@22 -- # local timeout=20 00:11:10.260 05:09:59 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:10.260 05:09:59 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:11:10.260 05:09:59 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:10.260 05:09:59 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:10.260 05:09:59 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:10.260 05:09:59 -- target/multipath.sh@22 -- # local timeout=20 00:11:10.260 05:09:59 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:10.260 05:09:59 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:10.260 05:09:59 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:10.260 05:09:59 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:10.518 05:10:00 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:10.777 05:10:00 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:10.777 05:10:00 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:10.777 05:10:00 -- target/multipath.sh@22 -- # local timeout=20 00:11:10.777 05:10:00 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:10.777 05:10:00 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:10.777 05:10:00 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:10.777 05:10:00 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:10.777 05:10:00 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:10.777 05:10:00 -- target/multipath.sh@22 -- # local timeout=20 00:11:10.777 05:10:00 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:10.777 05:10:00 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:10.777 05:10:00 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:10.777 05:10:00 -- target/multipath.sh@132 -- # wait 74575 00:11:14.998 00:11:14.998 job0: (groupid=0, jobs=1): err= 0: pid=74597: Sun Dec 8 05:10:04 2024 00:11:14.998 read: IOPS=10.5k, BW=40.9MiB/s (42.9MB/s)(246MiB/6008msec) 00:11:14.998 slat (usec): min=4, max=6528, avg=45.33, stdev=192.33 00:11:14.998 clat (usec): min=286, max=22831, avg=8211.84, stdev=2800.90 00:11:14.998 lat (usec): min=310, max=22843, avg=8257.18, stdev=2809.45 00:11:14.998 clat percentiles (usec): 00:11:14.998 | 1.00th=[ 1156], 5.00th=[ 2704], 10.00th=[ 4621], 20.00th=[ 6783], 00:11:14.998 | 30.00th=[ 7504], 40.00th=[ 7832], 50.00th=[ 8160], 60.00th=[ 8586], 00:11:14.998 | 70.00th=[ 9110], 80.00th=[10028], 90.00th=[11338], 95.00th=[12387], 00:11:14.998 | 99.00th=[16909], 99.50th=[18220], 99.90th=[20055], 99.95th=[20317], 00:11:14.998 | 99.99th=[21890] 00:11:14.998 bw ( KiB/s): min= 4424, max=28416, per=54.79%, avg=22938.67, stdev=6720.89, samples=12 00:11:14.998 iops : min= 1106, max= 7104, avg=5734.67, stdev=1680.22, samples=12 00:11:14.998 write: IOPS=6338, BW=24.8MiB/s (26.0MB/s)(135MiB/5434msec); 0 zone resets 00:11:14.998 slat (usec): min=7, max=4304, avg=62.45, stdev=132.14 00:11:14.998 clat (usec): min=219, max=20704, avg=7144.42, stdev=2154.07 00:11:14.998 lat (usec): min=242, max=20759, avg=7206.87, stdev=2165.11 00:11:14.998 clat percentiles (usec): 00:11:14.998 | 1.00th=[ 1418], 5.00th=[ 3228], 10.00th=[ 4228], 20.00th=[ 5473], 00:11:14.998 | 30.00th=[ 6587], 40.00th=[ 7046], 50.00th=[ 7373], 60.00th=[ 7635], 00:11:14.998 | 70.00th=[ 8094], 80.00th=[ 8717], 90.00th=[ 9634], 95.00th=[10159], 00:11:14.998 | 99.00th=[12256], 99.50th=[13829], 99.90th=[18482], 99.95th=[19268], 00:11:14.998 | 99.99th=[20317] 00:11:14.998 bw ( KiB/s): min= 4840, max=29624, per=90.39%, avg=22920.00, stdev=6613.87, samples=12 00:11:14.999 iops : min= 1210, max= 7406, avg=5730.00, stdev=1653.47, samples=12 00:11:14.999 lat (usec) : 250=0.01%, 500=0.05%, 750=0.24%, 1000=0.25% 00:11:14.999 lat (msec) : 2=2.25%, 4=5.49%, 10=76.65%, 20=14.99%, 50=0.09% 00:11:14.999 cpu : usr=6.39%, sys=27.93%, ctx=5972, majf=0, minf=108 00:11:14.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:14.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.999 issued rwts: total=62882,34445,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.999 00:11:14.999 Run status group 0 (all jobs): 00:11:14.999 READ: bw=40.9MiB/s (42.9MB/s), 40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=246MiB (258MB), run=6008-6008msec 00:11:14.999 WRITE: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=135MiB (141MB), run=5434-5434msec 00:11:14.999 00:11:14.999 Disk stats (read/write): 00:11:14.999 nvme0n1: ios=62109/33967, merge=0/0, ticks=481603/222938, in_queue=704541, util=98.63% 00:11:14.999 05:10:04 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:14.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:14.999 05:10:04 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:14.999 05:10:04 -- common/autotest_common.sh@1208 -- # local i=0 00:11:14.999 05:10:04 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 
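Both fio runs finish error-free even though the harness flips the listeners between inaccessible, non_optimized and optimized while I/O is in flight; each failover is driven purely from the target side with one RPC per listener, and the initiator's path states follow. One such flip, as issued in the trace above, plus the corresponding initiator-side check (note the sysfs spelling non-optimized versus the RPC's non_optimized):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Fail the 10.0.0.2 portal and keep 10.0.0.3 usable while fio keeps running.
$rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
$rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.3 -s 4420 -n non_optimized

# The initiator should now route all I/O through the second path.
cat /sys/block/nvme0c0n1/ana_state   # expected: inaccessible
cat /sys/block/nvme0c1n1/ana_state   # expected: non-optimized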
00:11:14.999 05:10:04 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.999 05:10:04 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:14.999 05:10:04 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.999 05:10:04 -- common/autotest_common.sh@1220 -- # return 0 00:11:14.999 05:10:04 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.256 05:10:05 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:15.256 05:10:05 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:15.256 05:10:05 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:15.256 05:10:05 -- target/multipath.sh@144 -- # nvmftestfini 00:11:15.256 05:10:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:15.256 05:10:05 -- nvmf/common.sh@116 -- # sync 00:11:15.513 05:10:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:15.513 05:10:05 -- nvmf/common.sh@119 -- # set +e 00:11:15.513 05:10:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:15.513 05:10:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:15.513 rmmod nvme_tcp 00:11:15.513 rmmod nvme_fabrics 00:11:15.513 rmmod nvme_keyring 00:11:15.513 05:10:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:15.513 05:10:05 -- nvmf/common.sh@123 -- # set -e 00:11:15.513 05:10:05 -- nvmf/common.sh@124 -- # return 0 00:11:15.513 05:10:05 -- nvmf/common.sh@477 -- # '[' -n 74360 ']' 00:11:15.513 05:10:05 -- nvmf/common.sh@478 -- # killprocess 74360 00:11:15.513 05:10:05 -- common/autotest_common.sh@936 -- # '[' -z 74360 ']' 00:11:15.513 05:10:05 -- common/autotest_common.sh@940 -- # kill -0 74360 00:11:15.513 05:10:05 -- common/autotest_common.sh@941 -- # uname 00:11:15.513 05:10:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:15.513 05:10:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74360 00:11:15.513 killing process with pid 74360 00:11:15.513 05:10:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:15.513 05:10:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:15.513 05:10:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74360' 00:11:15.513 05:10:05 -- common/autotest_common.sh@955 -- # kill 74360 00:11:15.513 05:10:05 -- common/autotest_common.sh@960 -- # wait 74360 00:11:15.769 05:10:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:15.769 05:10:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:15.769 05:10:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:15.769 05:10:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:15.769 05:10:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:15.769 05:10:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.769 05:10:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:15.769 05:10:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.769 05:10:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:15.769 ************************************ 00:11:15.769 END TEST nvmf_multipath 00:11:15.769 ************************************ 00:11:15.769 00:11:15.769 real 0m22.382s 00:11:15.769 user 1m15.733s 00:11:15.769 sys 0m11.920s 00:11:15.769 05:10:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:15.769 05:10:05 -- common/autotest_common.sh@10 -- # set +x 00:11:15.769 05:10:05 -- 
nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:15.769 05:10:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:15.769 05:10:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:15.769 05:10:05 -- common/autotest_common.sh@10 -- # set +x 00:11:15.769 ************************************ 00:11:15.769 START TEST nvmf_zcopy 00:11:15.769 ************************************ 00:11:15.769 05:10:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:15.769 * Looking for test storage... 00:11:15.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:15.769 05:10:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:15.769 05:10:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:15.769 05:10:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:16.026 05:10:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:16.026 05:10:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:16.026 05:10:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:16.026 05:10:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:16.026 05:10:05 -- scripts/common.sh@335 -- # IFS=.-: 00:11:16.026 05:10:05 -- scripts/common.sh@335 -- # read -ra ver1 00:11:16.026 05:10:05 -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.026 05:10:05 -- scripts/common.sh@336 -- # read -ra ver2 00:11:16.026 05:10:05 -- scripts/common.sh@337 -- # local 'op=<' 00:11:16.026 05:10:05 -- scripts/common.sh@339 -- # ver1_l=2 00:11:16.027 05:10:05 -- scripts/common.sh@340 -- # ver2_l=1 00:11:16.027 05:10:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:16.027 05:10:05 -- scripts/common.sh@343 -- # case "$op" in 00:11:16.027 05:10:05 -- scripts/common.sh@344 -- # : 1 00:11:16.027 05:10:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:16.027 05:10:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.027 05:10:05 -- scripts/common.sh@364 -- # decimal 1 00:11:16.027 05:10:05 -- scripts/common.sh@352 -- # local d=1 00:11:16.027 05:10:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.027 05:10:05 -- scripts/common.sh@354 -- # echo 1 00:11:16.027 05:10:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:16.027 05:10:05 -- scripts/common.sh@365 -- # decimal 2 00:11:16.027 05:10:05 -- scripts/common.sh@352 -- # local d=2 00:11:16.027 05:10:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.027 05:10:05 -- scripts/common.sh@354 -- # echo 2 00:11:16.027 05:10:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:16.027 05:10:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:16.027 05:10:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:16.027 05:10:05 -- scripts/common.sh@367 -- # return 0 00:11:16.027 05:10:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.027 05:10:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:16.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.027 --rc genhtml_branch_coverage=1 00:11:16.027 --rc genhtml_function_coverage=1 00:11:16.027 --rc genhtml_legend=1 00:11:16.027 --rc geninfo_all_blocks=1 00:11:16.027 --rc geninfo_unexecuted_blocks=1 00:11:16.027 00:11:16.027 ' 00:11:16.027 05:10:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:16.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.027 --rc genhtml_branch_coverage=1 00:11:16.027 --rc genhtml_function_coverage=1 00:11:16.027 --rc genhtml_legend=1 00:11:16.027 --rc geninfo_all_blocks=1 00:11:16.027 --rc geninfo_unexecuted_blocks=1 00:11:16.027 00:11:16.027 ' 00:11:16.027 05:10:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:16.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.027 --rc genhtml_branch_coverage=1 00:11:16.027 --rc genhtml_function_coverage=1 00:11:16.027 --rc genhtml_legend=1 00:11:16.027 --rc geninfo_all_blocks=1 00:11:16.027 --rc geninfo_unexecuted_blocks=1 00:11:16.027 00:11:16.027 ' 00:11:16.027 05:10:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:16.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.027 --rc genhtml_branch_coverage=1 00:11:16.027 --rc genhtml_function_coverage=1 00:11:16.027 --rc genhtml_legend=1 00:11:16.027 --rc geninfo_all_blocks=1 00:11:16.027 --rc geninfo_unexecuted_blocks=1 00:11:16.027 00:11:16.027 ' 00:11:16.027 05:10:05 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:16.027 05:10:05 -- nvmf/common.sh@7 -- # uname -s 00:11:16.027 05:10:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.027 05:10:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.027 05:10:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.027 05:10:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.027 05:10:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.027 05:10:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.027 05:10:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.027 05:10:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.027 05:10:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.027 05:10:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.027 05:10:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:11:16.027 
05:10:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:11:16.027 05:10:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.027 05:10:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.027 05:10:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:16.027 05:10:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:16.027 05:10:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.027 05:10:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.027 05:10:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.027 05:10:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.027 05:10:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.027 05:10:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.027 05:10:05 -- paths/export.sh@5 -- # export PATH 00:11:16.027 05:10:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.027 05:10:05 -- nvmf/common.sh@46 -- # : 0 00:11:16.027 05:10:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:16.027 05:10:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:16.027 05:10:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:16.027 05:10:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.027 05:10:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.027 05:10:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
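The build_nvmf_app_args calls traced above amount to the following shell sketch; the base NVMF_APP value is an assumption (it is set before this excerpt), everything else is taken from the trace:

    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)   # assumption: base array value; the path matches the launch traced later
    NVMF_APP_SHM_ID=0                                            # consistent with ': 0' at nvmf/common.sh@46
    NO_HUGE=()                                                   # empty in this run, so common.sh@30 appends nothing
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                  # nvmf/common.sh@28
    NVMF_APP+=("${NO_HUGE[@]}")                                  # nvmf/common.sh@30
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)          # nvmf/common.sh@147, set during nvmf_veth_init below
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")       # nvmf/common.sh@208, applied after the veth setup
    "${NVMF_APP[@]}" -m 0x2                                      # matches the nvmf_tgt launch traced at 05:10:06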
00:11:16.027 05:10:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:16.027 05:10:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:16.027 05:10:05 -- target/zcopy.sh@12 -- # nvmftestinit 00:11:16.027 05:10:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:16.027 05:10:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.027 05:10:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:16.027 05:10:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:16.027 05:10:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:16.027 05:10:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.027 05:10:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:16.027 05:10:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.027 05:10:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:16.027 05:10:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:16.027 05:10:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:16.027 05:10:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:16.027 05:10:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:16.027 05:10:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:16.027 05:10:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.027 05:10:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:16.027 05:10:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:16.027 05:10:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:16.027 05:10:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:16.027 05:10:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:16.027 05:10:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:16.027 05:10:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.027 05:10:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:16.027 05:10:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:16.027 05:10:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:16.027 05:10:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:16.027 05:10:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:16.027 05:10:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:16.027 Cannot find device "nvmf_tgt_br" 00:11:16.027 05:10:05 -- nvmf/common.sh@154 -- # true 00:11:16.027 05:10:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:16.027 Cannot find device "nvmf_tgt_br2" 00:11:16.027 05:10:05 -- nvmf/common.sh@155 -- # true 00:11:16.027 05:10:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:16.027 05:10:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:16.027 Cannot find device "nvmf_tgt_br" 00:11:16.027 05:10:05 -- nvmf/common.sh@157 -- # true 00:11:16.027 05:10:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:16.027 Cannot find device "nvmf_tgt_br2" 00:11:16.027 05:10:05 -- nvmf/common.sh@158 -- # true 00:11:16.027 05:10:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:16.027 05:10:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:16.027 05:10:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:16.027 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:16.027 05:10:05 -- nvmf/common.sh@161 -- # true 00:11:16.027 05:10:05 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:16.027 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:16.027 05:10:05 -- nvmf/common.sh@162 -- # true 00:11:16.027 05:10:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:16.027 05:10:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:16.027 05:10:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:16.027 05:10:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:16.027 05:10:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:16.028 05:10:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:16.285 05:10:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:16.285 05:10:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:16.285 05:10:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:16.285 05:10:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:16.285 05:10:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:16.285 05:10:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:16.285 05:10:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:16.285 05:10:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:16.285 05:10:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:16.285 05:10:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:16.285 05:10:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:16.285 05:10:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:16.285 05:10:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:16.285 05:10:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:16.285 05:10:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:16.285 05:10:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:16.285 05:10:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:16.285 05:10:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:16.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:16.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:11:16.285 00:11:16.285 --- 10.0.0.2 ping statistics --- 00:11:16.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.285 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:11:16.285 05:10:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:16.285 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:16.285 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:11:16.285 00:11:16.285 --- 10.0.0.3 ping statistics --- 00:11:16.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.285 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:11:16.285 05:10:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:16.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:16.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:11:16.285 00:11:16.285 --- 10.0.0.1 ping statistics --- 00:11:16.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.285 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:11:16.285 05:10:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.285 05:10:05 -- nvmf/common.sh@421 -- # return 0 00:11:16.285 05:10:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:16.285 05:10:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.285 05:10:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:16.285 05:10:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:16.285 05:10:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.285 05:10:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:16.285 05:10:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:16.285 05:10:05 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:16.285 05:10:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:16.285 05:10:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:16.285 05:10:05 -- common/autotest_common.sh@10 -- # set +x 00:11:16.285 05:10:06 -- nvmf/common.sh@469 -- # nvmfpid=74876 00:11:16.285 05:10:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:16.285 05:10:06 -- nvmf/common.sh@470 -- # waitforlisten 74876 00:11:16.285 05:10:06 -- common/autotest_common.sh@829 -- # '[' -z 74876 ']' 00:11:16.285 05:10:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.285 05:10:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:16.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.285 05:10:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.285 05:10:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:16.285 05:10:06 -- common/autotest_common.sh@10 -- # set +x 00:11:16.543 [2024-12-08 05:10:06.070726] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:16.543 [2024-12-08 05:10:06.070874] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.543 [2024-12-08 05:10:06.213935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.543 [2024-12-08 05:10:06.254135] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:16.543 [2024-12-08 05:10:06.254336] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.543 [2024-12-08 05:10:06.254360] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.543 [2024-12-08 05:10:06.254375] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
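For readability, the nvmf_veth_init sequence traced above condenses to the following shell sketch; names and addresses are exactly the ones in the trace, and the initial cleanup of devices that do not yet exist is omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                 # initiator -> target, as in the stats above
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                        # target -> initiator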
00:11:16.543 [2024-12-08 05:10:06.254420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.475 05:10:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:17.475 05:10:07 -- common/autotest_common.sh@862 -- # return 0 00:11:17.475 05:10:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:17.475 05:10:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:17.475 05:10:07 -- common/autotest_common.sh@10 -- # set +x 00:11:17.475 05:10:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.475 05:10:07 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:17.475 05:10:07 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:17.475 05:10:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.475 05:10:07 -- common/autotest_common.sh@10 -- # set +x 00:11:17.475 [2024-12-08 05:10:07.174839] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.475 05:10:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.475 05:10:07 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:17.475 05:10:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.475 05:10:07 -- common/autotest_common.sh@10 -- # set +x 00:11:17.475 05:10:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.475 05:10:07 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.475 05:10:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.475 05:10:07 -- common/autotest_common.sh@10 -- # set +x 00:11:17.475 [2024-12-08 05:10:07.190986] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.475 05:10:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.475 05:10:07 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:17.475 05:10:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.475 05:10:07 -- common/autotest_common.sh@10 -- # set +x 00:11:17.475 05:10:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.475 05:10:07 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:17.475 05:10:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.475 05:10:07 -- common/autotest_common.sh@10 -- # set +x 00:11:17.475 malloc0 00:11:17.475 05:10:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.475 05:10:07 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:17.475 05:10:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.475 05:10:07 -- common/autotest_common.sh@10 -- # set +x 00:11:17.475 05:10:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.475 05:10:07 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:17.476 05:10:07 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:17.476 05:10:07 -- nvmf/common.sh@520 -- # config=() 00:11:17.476 05:10:07 -- nvmf/common.sh@520 -- # local subsystem config 00:11:17.476 05:10:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:17.476 05:10:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:17.476 { 00:11:17.476 "params": { 00:11:17.476 "name": "Nvme$subsystem", 00:11:17.476 "trtype": "$TEST_TRANSPORT", 
00:11:17.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:17.476 "adrfam": "ipv4", 00:11:17.476 "trsvcid": "$NVMF_PORT", 00:11:17.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:17.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:17.476 "hdgst": ${hdgst:-false}, 00:11:17.476 "ddgst": ${ddgst:-false} 00:11:17.476 }, 00:11:17.476 "method": "bdev_nvme_attach_controller" 00:11:17.476 } 00:11:17.476 EOF 00:11:17.476 )") 00:11:17.476 05:10:07 -- nvmf/common.sh@542 -- # cat 00:11:17.476 05:10:07 -- nvmf/common.sh@544 -- # jq . 00:11:17.476 05:10:07 -- nvmf/common.sh@545 -- # IFS=, 00:11:17.476 05:10:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:17.476 "params": { 00:11:17.476 "name": "Nvme1", 00:11:17.476 "trtype": "tcp", 00:11:17.476 "traddr": "10.0.0.2", 00:11:17.476 "adrfam": "ipv4", 00:11:17.476 "trsvcid": "4420", 00:11:17.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:17.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:17.476 "hdgst": false, 00:11:17.476 "ddgst": false 00:11:17.476 }, 00:11:17.476 "method": "bdev_nvme_attach_controller" 00:11:17.476 }' 00:11:17.732 [2024-12-08 05:10:07.268830] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:17.732 [2024-12-08 05:10:07.268933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74909 ] 00:11:17.732 [2024-12-08 05:10:07.402105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.732 [2024-12-08 05:10:07.443996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.989 Running I/O for 10 seconds... 00:11:27.953 00:11:27.953 Latency(us) 00:11:27.953 [2024-12-08T05:10:17.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:27.953 [2024-12-08T05:10:17.739Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:27.953 Verification LBA range: start 0x0 length 0x1000 00:11:27.953 Nvme1n1 : 10.01 8740.56 68.29 0.00 0.00 14605.62 1697.98 22282.24 00:11:27.953 [2024-12-08T05:10:17.739Z] =================================================================================================================== 00:11:27.953 [2024-12-08T05:10:17.739Z] Total : 8740.56 68.29 0.00 0.00 14605.62 1697.98 22282.24 00:11:28.211 05:10:17 -- target/zcopy.sh@39 -- # perfpid=75022 00:11:28.211 05:10:17 -- target/zcopy.sh@41 -- # xtrace_disable 00:11:28.211 05:10:17 -- common/autotest_common.sh@10 -- # set +x 00:11:28.211 05:10:17 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:28.211 05:10:17 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:28.211 05:10:17 -- nvmf/common.sh@520 -- # config=() 00:11:28.211 05:10:17 -- nvmf/common.sh@520 -- # local subsystem config 00:11:28.211 05:10:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:28.211 05:10:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:28.211 { 00:11:28.211 "params": { 00:11:28.211 "name": "Nvme$subsystem", 00:11:28.211 "trtype": "$TEST_TRANSPORT", 00:11:28.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:28.211 "adrfam": "ipv4", 00:11:28.211 "trsvcid": "$NVMF_PORT", 00:11:28.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:28.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:28.211 "hdgst": ${hdgst:-false}, 00:11:28.211 "ddgst": ${ddgst:-false} 
00:11:28.211 }, 00:11:28.211 "method": "bdev_nvme_attach_controller" 00:11:28.211 } 00:11:28.211 EOF 00:11:28.211 )") 00:11:28.211 05:10:17 -- nvmf/common.sh@542 -- # cat 00:11:28.211 [2024-12-08 05:10:17.753927] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.753989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.211 05:10:17 -- nvmf/common.sh@544 -- # jq . 00:11:28.211 05:10:17 -- nvmf/common.sh@545 -- # IFS=, 00:11:28.211 05:10:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:28.211 "params": { 00:11:28.211 "name": "Nvme1", 00:11:28.211 "trtype": "tcp", 00:11:28.211 "traddr": "10.0.0.2", 00:11:28.211 "adrfam": "ipv4", 00:11:28.211 "trsvcid": "4420", 00:11:28.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:28.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:28.211 "hdgst": false, 00:11:28.211 "ddgst": false 00:11:28.211 }, 00:11:28.211 "method": "bdev_nvme_attach_controller" 00:11:28.211 }' 00:11:28.211 [2024-12-08 05:10:17.761908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.761968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.211 [2024-12-08 05:10:17.769901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.769962] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.211 [2024-12-08 05:10:17.781925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.781998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.211 [2024-12-08 05:10:17.793945] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.794017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.211 [2024-12-08 05:10:17.805925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.805996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.211 [2024-12-08 05:10:17.807070] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
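Both bdevperf runs in this test read their target description from a file-descriptor argument: -t 10 -q 128 -w verify -o 8192 for the 10-second run whose results table appears above, and -t 5 -q 128 -w randrw -M 50 -o 8192 for the 5-second run starting here. The /dev/fd paths are consistent with a process substitution over gen_nvmf_target_json; a hedged sketch of the second invocation, with the connection parameters that the trace's printf resolves:

    # Sketch only; the fd number is assigned at runtime and the full wrapper document produced by
    # gen_nvmf_target_json is not visible in this excerpt.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192
    # Per the printed config, the bdev_nvme_attach_controller entry carries:
    #   name=Nvme1 trtype=tcp traddr=10.0.0.2 adrfam=ipv4 trsvcid=4420
    #   subnqn=nqn.2016-06.io.spdk:cnode1 hostnqn=nqn.2016-06.io.spdk:host1 hdgst=false ddgst=false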
00:11:28.211 [2024-12-08 05:10:17.807753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75022 ] 00:11:28.211 [2024-12-08 05:10:17.817962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.818030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.211 [2024-12-08 05:10:17.829967] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.830040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.211 [2024-12-08 05:10:17.841909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.841965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.211 [2024-12-08 05:10:17.853897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.853949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.211 [2024-12-08 05:10:17.865921] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.865982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.211 [2024-12-08 05:10:17.877927] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.877984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.211 [2024-12-08 05:10:17.889916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.889971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.211 [2024-12-08 05:10:17.901924] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.901978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.211 [2024-12-08 05:10:17.913917] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.913969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.211 [2024-12-08 05:10:17.925922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.925971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.211 [2024-12-08 05:10:17.937940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.937991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.211 [2024-12-08 05:10:17.945920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.945967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.211 [2024-12-08 05:10:17.948748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.211 [2024-12-08 05:10:17.953926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.953975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
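The error pairs that repeat below ("Requested NSID 1 already in use" followed by "Unable to add namespace") come from nvmf_subsystem_add_ns attempts against a namespace ID that is already attached. Target-side state at this point, per the rpc_cmd calls traced earlier in this run:

    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

Re-issuing that last call while NSID 1 is still attached reproduces the same pair of messages, which is consistent with the test re-adding the namespace repeatedly while the 5-second randrw workload runs:

    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # -> Requested NSID 1 already in use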
00:11:28.211 [2024-12-08 05:10:17.965945] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.966002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.211 [2024-12-08 05:10:17.977993] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.978067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.211 [2024-12-08 05:10:17.985936] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.985987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.211 [2024-12-08 05:10:17.990555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.211 [2024-12-08 05:10:17.993947] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.211 [2024-12-08 05:10:17.993990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.470 [2024-12-08 05:10:18.001968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.470 [2024-12-08 05:10:18.002017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.470 [2024-12-08 05:10:18.013995] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.470 [2024-12-08 05:10:18.014059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.470 [2024-12-08 05:10:18.025977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.470 [2024-12-08 05:10:18.026037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.470 [2024-12-08 05:10:18.033963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.470 [2024-12-08 05:10:18.034012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.470 [2024-12-08 05:10:18.041967] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.470 [2024-12-08 05:10:18.042022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.470 [2024-12-08 05:10:18.049974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.470 [2024-12-08 05:10:18.050029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.470 [2024-12-08 05:10:18.061986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.470 [2024-12-08 05:10:18.062043] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.470 [2024-12-08 05:10:18.073999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.470 [2024-12-08 05:10:18.074054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.470 [2024-12-08 05:10:18.086015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.470 [2024-12-08 05:10:18.086073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.470 [2024-12-08 05:10:18.098027] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.470 [2024-12-08 05:10:18.098087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.470 [2024-12-08 05:10:18.110034] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.470 [2024-12-08 05:10:18.110091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.470 [2024-12-08 05:10:18.122036] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.470 [2024-12-08 05:10:18.122093] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.470 Running I/O for 5 seconds... 00:11:28.470 [2024-12-08 05:10:18.134049] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.470 [2024-12-08 05:10:18.134098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.470 [2024-12-08 05:10:18.146842] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.470 [2024-12-08 05:10:18.146906] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.470 [2024-12-08 05:10:18.164405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.470 [2024-12-08 05:10:18.164473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.470 [2024-12-08 05:10:18.179325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.470 [2024-12-08 05:10:18.179395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.470 [2024-12-08 05:10:18.189180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.470 [2024-12-08 05:10:18.189243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.470 [2024-12-08 05:10:18.204330] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.470 [2024-12-08 05:10:18.204402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.470 [2024-12-08 05:10:18.221349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.470 [2024-12-08 05:10:18.221422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.470 [2024-12-08 05:10:18.238367] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.470 [2024-12-08 05:10:18.238441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.470 [2024-12-08 05:10:18.253786] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.470 [2024-12-08 05:10:18.253854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.728 [2024-12-08 05:10:18.271122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.728 [2024-12-08 05:10:18.271190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.728 [2024-12-08 05:10:18.288446] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.728 [2024-12-08 05:10:18.288518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.728 [2024-12-08 05:10:18.303608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.728 [2024-12-08 05:10:18.303697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.728 [2024-12-08 05:10:18.312957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.728 
[2024-12-08 05:10:18.313019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.728 [2024-12-08 05:10:18.330602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.728 [2024-12-08 05:10:18.330698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.728 [2024-12-08 05:10:18.346095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.728 [2024-12-08 05:10:18.346194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.728 [2024-12-08 05:10:18.364074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.728 [2024-12-08 05:10:18.364142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.729 [2024-12-08 05:10:18.380649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.729 [2024-12-08 05:10:18.380729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.729 [2024-12-08 05:10:18.396818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.729 [2024-12-08 05:10:18.396883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.729 [2024-12-08 05:10:18.414220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.729 [2024-12-08 05:10:18.414277] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.729 [2024-12-08 05:10:18.435694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.729 [2024-12-08 05:10:18.435765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.729 [2024-12-08 05:10:18.450452] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.729 [2024-12-08 05:10:18.450513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.729 [2024-12-08 05:10:18.460518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.729 [2024-12-08 05:10:18.460583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.729 [2024-12-08 05:10:18.475881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.729 [2024-12-08 05:10:18.475943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.729 [2024-12-08 05:10:18.492539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.729 [2024-12-08 05:10:18.492609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.729 [2024-12-08 05:10:18.509121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.729 [2024-12-08 05:10:18.509189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.987 [2024-12-08 05:10:18.525909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.987 [2024-12-08 05:10:18.525978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.987 [2024-12-08 05:10:18.543047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.987 [2024-12-08 05:10:18.543112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.987 [2024-12-08 05:10:18.559233] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.987 [2024-12-08 05:10:18.559300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.987 [2024-12-08 05:10:18.576157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.987 [2024-12-08 05:10:18.576227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.987 [2024-12-08 05:10:18.591971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.987 [2024-12-08 05:10:18.592044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.987 [2024-12-08 05:10:18.610249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.987 [2024-12-08 05:10:18.610318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.987 [2024-12-08 05:10:18.626000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.987 [2024-12-08 05:10:18.626073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.987 [2024-12-08 05:10:18.642388] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.987 [2024-12-08 05:10:18.642459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.987 [2024-12-08 05:10:18.658749] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.987 [2024-12-08 05:10:18.658829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.987 [2024-12-08 05:10:18.677548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.987 [2024-12-08 05:10:18.677618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.987 [2024-12-08 05:10:18.692308] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.987 [2024-12-08 05:10:18.692375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.987 [2024-12-08 05:10:18.701221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.987 [2024-12-08 05:10:18.701285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.987 [2024-12-08 05:10:18.717384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.987 [2024-12-08 05:10:18.717451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.987 [2024-12-08 05:10:18.726488] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.987 [2024-12-08 05:10:18.726545] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.987 [2024-12-08 05:10:18.743082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.987 [2024-12-08 05:10:18.743149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:28.987 [2024-12-08 05:10:18.761828] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:28.987 [2024-12-08 05:10:18.761895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.245 [2024-12-08 05:10:18.776626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.245 [2024-12-08 05:10:18.776708] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.245 [2024-12-08 05:10:18.792495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.245 [2024-12-08 05:10:18.792563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.245 [2024-12-08 05:10:18.801902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.245 [2024-12-08 05:10:18.801957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.245 [2024-12-08 05:10:18.813220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.245 [2024-12-08 05:10:18.813279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.245 [2024-12-08 05:10:18.830743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.245 [2024-12-08 05:10:18.830819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.245 [2024-12-08 05:10:18.847732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.245 [2024-12-08 05:10:18.847797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.245 [2024-12-08 05:10:18.857473] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.245 [2024-12-08 05:10:18.857530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.245 [2024-12-08 05:10:18.868727] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.245 [2024-12-08 05:10:18.868783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.245 [2024-12-08 05:10:18.883903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.245 [2024-12-08 05:10:18.883965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.245 [2024-12-08 05:10:18.901313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.245 [2024-12-08 05:10:18.901378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.245 [2024-12-08 05:10:18.915908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.245 [2024-12-08 05:10:18.915968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.245 [2024-12-08 05:10:18.932731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.245 [2024-12-08 05:10:18.932797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.245 [2024-12-08 05:10:18.947708] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.245 [2024-12-08 05:10:18.947773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.245 [2024-12-08 05:10:18.956662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.245 [2024-12-08 05:10:18.956734] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.245 [2024-12-08 05:10:18.969693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.245 [2024-12-08 05:10:18.969754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.245 [2024-12-08 05:10:18.986395] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.245 [2024-12-08 05:10:18.986464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.245 [2024-12-08 05:10:18.996492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.245 [2024-12-08 05:10:18.996548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.245 [2024-12-08 05:10:19.008188] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.245 [2024-12-08 05:10:19.008251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.245 [2024-12-08 05:10:19.022904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.245 [2024-12-08 05:10:19.022971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.503 [2024-12-08 05:10:19.033023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.503 [2024-12-08 05:10:19.033083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.503 [2024-12-08 05:10:19.044596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.503 [2024-12-08 05:10:19.044657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.503 [2024-12-08 05:10:19.060468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.504 [2024-12-08 05:10:19.060533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.504 [2024-12-08 05:10:19.079037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.504 [2024-12-08 05:10:19.079107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.504 [2024-12-08 05:10:19.093498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.504 [2024-12-08 05:10:19.093564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.504 [2024-12-08 05:10:19.103109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.504 [2024-12-08 05:10:19.103165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.504 [2024-12-08 05:10:19.114922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.504 [2024-12-08 05:10:19.114984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.504 [2024-12-08 05:10:19.129369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.504 [2024-12-08 05:10:19.129437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.504 [2024-12-08 05:10:19.146444] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.504 [2024-12-08 05:10:19.146513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.504 [2024-12-08 05:10:19.160884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.504 [2024-12-08 05:10:19.160953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.504 [2024-12-08 05:10:19.169587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.504 [2024-12-08 05:10:19.169646] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.504 [2024-12-08 05:10:19.181522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.504 [2024-12-08 05:10:19.181586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.504 [2024-12-08 05:10:19.197123] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.504 [2024-12-08 05:10:19.197192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.504 [2024-12-08 05:10:19.206804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.504 [2024-12-08 05:10:19.206860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.504 [2024-12-08 05:10:19.221814] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.504 [2024-12-08 05:10:19.221882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.504 [2024-12-08 05:10:19.233668] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.504 [2024-12-08 05:10:19.233743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.504 [2024-12-08 05:10:19.242595] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.504 [2024-12-08 05:10:19.242654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.504 [2024-12-08 05:10:19.255237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.504 [2024-12-08 05:10:19.255301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.504 [2024-12-08 05:10:19.265635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.504 [2024-12-08 05:10:19.265710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.504 [2024-12-08 05:10:19.280155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.504 [2024-12-08 05:10:19.280223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.763 [2024-12-08 05:10:19.297382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.763 [2024-12-08 05:10:19.297451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.763 [2024-12-08 05:10:19.307139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.763 [2024-12-08 05:10:19.307199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.763 [2024-12-08 05:10:19.318461] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.763 [2024-12-08 05:10:19.318523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.763 [2024-12-08 05:10:19.329102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.763 [2024-12-08 05:10:19.329164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.763 [2024-12-08 05:10:19.343152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.763 [2024-12-08 05:10:19.343218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.763 [2024-12-08 05:10:19.358489] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.763 [2024-12-08 05:10:19.358559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-entry pair (subsystem.c:1793: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1513: "Unable to add namespace") recurs continuously from 05:10:19.367 through 05:10:23.140, differing only in its timestamps ...]
00:11:33.379 Latency(us)
00:11:33.379 [2024-12-08T05:10:23.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:33.379 [2024-12-08T05:10:23.165Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:33.379 Nvme1n1 : 5.01 11476.17 89.66 0.00 0.00 11138.34 3902.37 21448.15
00:11:33.379 [2024-12-08T05:10:23.165Z] ===================================================================================================================
00:11:33.379 [2024-12-08T05:10:23.165Z] Total : 11476.17 89.66 0.00 0.00 11138.34 3902.37 21448.15
00:11:33.379 [2024-12-08 05:10:23.150277] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.379 [2024-12-08 05:10:23.150345]
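The Latency(us) table above closes out the roughly 5-second randrw verify job (11476.17 IOPS, 89.66 MiB/s, about 11.1 ms average latency), while the long run of paired errors around it comes from the test repeatedly asking the target to register a namespace under NSID 1 while that NSID is still occupied; the test completes normally, so these rejections look deliberately exercised rather than a failure. As a rough illustration only (not the exact call zcopy.sh makes), a single such pair can be provoked against a target configured like this one — subsystem nqn.2016-06.io.spdk:cnode1 already serving NSID 1, with malloc0 available as a bdev — using the standard SPDK RPC:

  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # the target rejects the request and logs the familiar pair:
  #   subsystem.c: Requested NSID 1 already in use
  #   nvmf_rpc.c:  Unable to add namespace

The NQN and the -n 1 namespace ID appear verbatim in the rpc_cmd calls later in this trace; the choice of malloc0 as the bdev argument is inferred from the bdev_delay_create -b malloc0 call below, and the direct scripts/rpc.py invocation is an illustrative stand-in for the test's rpc_cmd helper.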
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.379 [2024-12-08 05:10:23.162281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.379 [2024-12-08 05:10:23.162354] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.637 [2024-12-08 05:10:23.174296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.637 [2024-12-08 05:10:23.174371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.637 [2024-12-08 05:10:23.186327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.637 [2024-12-08 05:10:23.186432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.637 [2024-12-08 05:10:23.198327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.637 [2024-12-08 05:10:23.198416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.637 [2024-12-08 05:10:23.210303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.637 [2024-12-08 05:10:23.210388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.637 [2024-12-08 05:10:23.222305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.637 [2024-12-08 05:10:23.222381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.637 [2024-12-08 05:10:23.234306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.637 [2024-12-08 05:10:23.234372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.637 [2024-12-08 05:10:23.246329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.637 [2024-12-08 05:10:23.246408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.637 [2024-12-08 05:10:23.258324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.637 [2024-12-08 05:10:23.258392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.637 [2024-12-08 05:10:23.270339] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.637 [2024-12-08 05:10:23.270431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.637 [2024-12-08 05:10:23.282323] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.637 [2024-12-08 05:10:23.282406] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.637 [2024-12-08 05:10:23.294328] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.637 [2024-12-08 05:10:23.294395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.637 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (75022) - No such process 00:11:33.637 05:10:23 -- target/zcopy.sh@49 -- # wait 75022 00:11:33.637 05:10:23 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.637 05:10:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.637 05:10:23 -- common/autotest_common.sh@10 -- # set +x 00:11:33.637 05:10:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.637 05:10:23 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create 
-b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:33.637 05:10:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.637 05:10:23 -- common/autotest_common.sh@10 -- # set +x 00:11:33.637 delay0 00:11:33.637 05:10:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.637 05:10:23 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:33.637 05:10:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.637 05:10:23 -- common/autotest_common.sh@10 -- # set +x 00:11:33.637 05:10:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.637 05:10:23 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:33.894 [2024-12-08 05:10:23.494989] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:40.452 Initializing NVMe Controllers 00:11:40.452 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:40.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:40.452 Initialization complete. Launching workers. 00:11:40.452 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 69 00:11:40.452 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 356, failed to submit 33 00:11:40.452 success 262, unsuccess 94, failed 0 00:11:40.452 05:10:29 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:40.452 05:10:29 -- target/zcopy.sh@60 -- # nvmftestfini 00:11:40.452 05:10:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:40.452 05:10:29 -- nvmf/common.sh@116 -- # sync 00:11:40.452 05:10:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:40.452 05:10:29 -- nvmf/common.sh@119 -- # set +e 00:11:40.452 05:10:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:40.452 05:10:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:40.452 rmmod nvme_tcp 00:11:40.452 rmmod nvme_fabrics 00:11:40.452 rmmod nvme_keyring 00:11:40.452 05:10:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:40.452 05:10:29 -- nvmf/common.sh@123 -- # set -e 00:11:40.452 05:10:29 -- nvmf/common.sh@124 -- # return 0 00:11:40.452 05:10:29 -- nvmf/common.sh@477 -- # '[' -n 74876 ']' 00:11:40.452 05:10:29 -- nvmf/common.sh@478 -- # killprocess 74876 00:11:40.452 05:10:29 -- common/autotest_common.sh@936 -- # '[' -z 74876 ']' 00:11:40.452 05:10:29 -- common/autotest_common.sh@940 -- # kill -0 74876 00:11:40.452 05:10:29 -- common/autotest_common.sh@941 -- # uname 00:11:40.452 05:10:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:40.452 05:10:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74876 00:11:40.452 05:10:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:40.452 05:10:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:40.452 killing process with pid 74876 00:11:40.452 05:10:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74876' 00:11:40.452 05:10:29 -- common/autotest_common.sh@955 -- # kill 74876 00:11:40.452 05:10:29 -- common/autotest_common.sh@960 -- # wait 74876 00:11:40.452 05:10:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:40.452 05:10:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:40.452 05:10:29 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:11:40.452 05:10:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:40.452 05:10:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:40.452 05:10:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.452 05:10:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:40.452 05:10:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.452 05:10:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:40.452 00:11:40.452 real 0m24.457s 00:11:40.452 user 0m39.958s 00:11:40.452 sys 0m6.514s 00:11:40.452 05:10:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:40.452 05:10:29 -- common/autotest_common.sh@10 -- # set +x 00:11:40.452 ************************************ 00:11:40.452 END TEST nvmf_zcopy 00:11:40.452 ************************************ 00:11:40.452 05:10:29 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:40.452 05:10:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:40.452 05:10:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:40.452 05:10:29 -- common/autotest_common.sh@10 -- # set +x 00:11:40.452 ************************************ 00:11:40.452 START TEST nvmf_nmic 00:11:40.452 ************************************ 00:11:40.452 05:10:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:40.452 * Looking for test storage... 00:11:40.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:40.452 05:10:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:40.452 05:10:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:40.452 05:10:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:40.452 05:10:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:40.452 05:10:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:40.452 05:10:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:40.452 05:10:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:40.452 05:10:30 -- scripts/common.sh@335 -- # IFS=.-: 00:11:40.452 05:10:30 -- scripts/common.sh@335 -- # read -ra ver1 00:11:40.452 05:10:30 -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.452 05:10:30 -- scripts/common.sh@336 -- # read -ra ver2 00:11:40.452 05:10:30 -- scripts/common.sh@337 -- # local 'op=<' 00:11:40.452 05:10:30 -- scripts/common.sh@339 -- # ver1_l=2 00:11:40.452 05:10:30 -- scripts/common.sh@340 -- # ver2_l=1 00:11:40.452 05:10:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:40.452 05:10:30 -- scripts/common.sh@343 -- # case "$op" in 00:11:40.452 05:10:30 -- scripts/common.sh@344 -- # : 1 00:11:40.452 05:10:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:40.452 05:10:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:40.452 05:10:30 -- scripts/common.sh@364 -- # decimal 1 00:11:40.452 05:10:30 -- scripts/common.sh@352 -- # local d=1 00:11:40.452 05:10:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:40.452 05:10:30 -- scripts/common.sh@354 -- # echo 1 00:11:40.452 05:10:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:40.452 05:10:30 -- scripts/common.sh@365 -- # decimal 2 00:11:40.452 05:10:30 -- scripts/common.sh@352 -- # local d=2 00:11:40.452 05:10:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:40.452 05:10:30 -- scripts/common.sh@354 -- # echo 2 00:11:40.452 05:10:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:40.452 05:10:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:40.452 05:10:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:40.452 05:10:30 -- scripts/common.sh@367 -- # return 0 00:11:40.452 05:10:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:40.452 05:10:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:40.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.452 --rc genhtml_branch_coverage=1 00:11:40.452 --rc genhtml_function_coverage=1 00:11:40.452 --rc genhtml_legend=1 00:11:40.452 --rc geninfo_all_blocks=1 00:11:40.452 --rc geninfo_unexecuted_blocks=1 00:11:40.452 00:11:40.452 ' 00:11:40.452 05:10:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:40.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.452 --rc genhtml_branch_coverage=1 00:11:40.452 --rc genhtml_function_coverage=1 00:11:40.452 --rc genhtml_legend=1 00:11:40.453 --rc geninfo_all_blocks=1 00:11:40.453 --rc geninfo_unexecuted_blocks=1 00:11:40.453 00:11:40.453 ' 00:11:40.453 05:10:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:40.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.453 --rc genhtml_branch_coverage=1 00:11:40.453 --rc genhtml_function_coverage=1 00:11:40.453 --rc genhtml_legend=1 00:11:40.453 --rc geninfo_all_blocks=1 00:11:40.453 --rc geninfo_unexecuted_blocks=1 00:11:40.453 00:11:40.453 ' 00:11:40.453 05:10:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:40.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.453 --rc genhtml_branch_coverage=1 00:11:40.453 --rc genhtml_function_coverage=1 00:11:40.453 --rc genhtml_legend=1 00:11:40.453 --rc geninfo_all_blocks=1 00:11:40.453 --rc geninfo_unexecuted_blocks=1 00:11:40.453 00:11:40.453 ' 00:11:40.453 05:10:30 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:40.453 05:10:30 -- nvmf/common.sh@7 -- # uname -s 00:11:40.453 05:10:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.453 05:10:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.453 05:10:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.453 05:10:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.453 05:10:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.453 05:10:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.453 05:10:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.453 05:10:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.453 05:10:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.453 05:10:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.453 05:10:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:11:40.453 
05:10:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:11:40.453 05:10:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.453 05:10:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.453 05:10:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:40.453 05:10:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:40.453 05:10:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.453 05:10:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.453 05:10:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.453 05:10:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.453 05:10:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.453 05:10:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.453 05:10:30 -- paths/export.sh@5 -- # export PATH 00:11:40.453 05:10:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.453 05:10:30 -- nvmf/common.sh@46 -- # : 0 00:11:40.453 05:10:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:40.453 05:10:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:40.453 05:10:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:40.453 05:10:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.453 05:10:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.453 05:10:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
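Note: the harness above generates a per-run host identity with `nvme gen-hostnqn`; the same NQN/UUID are what the later `nvme connect` calls in this log pass as --hostnqn/--hostid. A minimal sketch follows, assuming the hostid is simply the UUID suffix of the generated NQN (the actual derivation inside common.sh is not shown in this trace); the UUID shown is the one printed by this particular run and would differ on a re-run.

    # Sketch: how the generated host identity feeds the connect call recorded further down.
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # this run produced nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumption: hostid taken from the NQN's UUID suffix
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420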
00:11:40.453 05:10:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:40.453 05:10:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:40.453 05:10:30 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:40.453 05:10:30 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:40.453 05:10:30 -- target/nmic.sh@14 -- # nvmftestinit 00:11:40.453 05:10:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:40.453 05:10:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.453 05:10:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:40.453 05:10:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:40.453 05:10:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:40.453 05:10:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.453 05:10:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:40.453 05:10:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.453 05:10:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:40.453 05:10:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:40.453 05:10:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:40.453 05:10:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:40.453 05:10:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:40.453 05:10:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:40.453 05:10:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:40.453 05:10:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:40.453 05:10:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:40.453 05:10:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:40.453 05:10:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:40.453 05:10:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:40.453 05:10:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:40.453 05:10:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:40.453 05:10:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:40.453 05:10:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:40.453 05:10:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:40.453 05:10:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:40.453 05:10:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:40.453 05:10:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:40.453 Cannot find device "nvmf_tgt_br" 00:11:40.453 05:10:30 -- nvmf/common.sh@154 -- # true 00:11:40.453 05:10:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:40.453 Cannot find device "nvmf_tgt_br2" 00:11:40.453 05:10:30 -- nvmf/common.sh@155 -- # true 00:11:40.453 05:10:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:40.453 05:10:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:40.453 Cannot find device "nvmf_tgt_br" 00:11:40.453 05:10:30 -- nvmf/common.sh@157 -- # true 00:11:40.453 05:10:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:40.453 Cannot find device "nvmf_tgt_br2" 00:11:40.453 05:10:30 -- nvmf/common.sh@158 -- # true 00:11:40.453 05:10:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:40.453 05:10:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:40.453 05:10:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:40.453 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:11:40.453 05:10:30 -- nvmf/common.sh@161 -- # true 00:11:40.453 05:10:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:40.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:40.453 05:10:30 -- nvmf/common.sh@162 -- # true 00:11:40.453 05:10:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:40.453 05:10:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:40.711 05:10:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:40.711 05:10:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:40.711 05:10:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:40.711 05:10:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:40.711 05:10:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:40.711 05:10:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:40.711 05:10:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:40.711 05:10:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:40.711 05:10:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:40.711 05:10:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:40.711 05:10:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:40.711 05:10:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:40.711 05:10:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:40.711 05:10:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:40.711 05:10:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:40.711 05:10:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:40.711 05:10:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:40.711 05:10:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:40.711 05:10:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:40.711 05:10:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:40.711 05:10:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:40.711 05:10:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:40.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:40.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:11:40.711 00:11:40.711 --- 10.0.0.2 ping statistics --- 00:11:40.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.711 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:11:40.711 05:10:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:40.711 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:40.711 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:11:40.711 00:11:40.711 --- 10.0.0.3 ping statistics --- 00:11:40.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.711 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:11:40.711 05:10:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:40.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:40.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:11:40.711 00:11:40.711 --- 10.0.0.1 ping statistics --- 00:11:40.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.711 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:11:40.711 05:10:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:40.711 05:10:30 -- nvmf/common.sh@421 -- # return 0 00:11:40.711 05:10:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:40.711 05:10:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:40.711 05:10:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:40.711 05:10:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:40.711 05:10:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:40.711 05:10:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:40.711 05:10:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:40.711 05:10:30 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:40.711 05:10:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:40.711 05:10:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:40.711 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:11:40.711 05:10:30 -- nvmf/common.sh@469 -- # nvmfpid=75351 00:11:40.711 05:10:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:40.711 05:10:30 -- nvmf/common.sh@470 -- # waitforlisten 75351 00:11:40.711 05:10:30 -- common/autotest_common.sh@829 -- # '[' -z 75351 ']' 00:11:40.711 05:10:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.711 05:10:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:40.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.711 05:10:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.711 05:10:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:40.711 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:11:40.975 [2024-12-08 05:10:30.511006] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:40.975 [2024-12-08 05:10:30.511135] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.975 [2024-12-08 05:10:30.663320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:40.975 [2024-12-08 05:10:30.699530] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:40.975 [2024-12-08 05:10:30.699937] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.975 [2024-12-08 05:10:30.700034] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.975 [2024-12-08 05:10:30.700135] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:40.975 [2024-12-08 05:10:30.700276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.975 [2024-12-08 05:10:30.700395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.975 [2024-12-08 05:10:30.700932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:40.975 [2024-12-08 05:10:30.700938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.232 05:10:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:41.232 05:10:30 -- common/autotest_common.sh@862 -- # return 0 00:11:41.232 05:10:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:41.232 05:10:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:41.232 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:11:41.232 05:10:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.232 05:10:30 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:41.232 05:10:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.232 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:11:41.232 [2024-12-08 05:10:30.850940] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:41.232 05:10:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.232 05:10:30 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:41.232 05:10:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.232 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:11:41.232 Malloc0 00:11:41.232 05:10:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.232 05:10:30 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:41.232 05:10:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.232 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:11:41.233 05:10:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.233 05:10:30 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:41.233 05:10:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.233 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:11:41.233 05:10:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.233 05:10:30 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:41.233 05:10:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.233 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:11:41.233 [2024-12-08 05:10:30.910174] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:41.233 05:10:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.233 05:10:30 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:41.233 test case1: single bdev can't be used in multiple subsystems 00:11:41.233 05:10:30 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:41.233 05:10:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.233 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:11:41.233 05:10:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.233 05:10:30 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:41.233 05:10:30 -- common/autotest_common.sh@561 -- # xtrace_disable 
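Note: before the subsystem RPCs above can be reached over TCP, nvmf_veth_init wires the initiator and the target into separate network namespaces joined by a bridge. The sketch below condenses the ip/iptables commands recorded in this trace; interface names, addresses and port 4420 are the test defaults taken from the log, and the second target interface (nvmf_tgt_if2 / 10.0.0.3) is configured the same way and omitted here for brevity.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge                              # bridge joining the peer ends
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                           # sanity check: initiator reaches the target address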
00:11:41.233 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:11:41.233 05:10:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.233 05:10:30 -- target/nmic.sh@28 -- # nmic_status=0 00:11:41.233 05:10:30 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:41.233 05:10:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.233 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:11:41.233 [2024-12-08 05:10:30.933963] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:41.233 [2024-12-08 05:10:30.934202] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:41.233 [2024-12-08 05:10:30.934299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.233 request: 00:11:41.233 { 00:11:41.233 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:41.233 "namespace": { 00:11:41.233 "bdev_name": "Malloc0" 00:11:41.233 }, 00:11:41.233 "method": "nvmf_subsystem_add_ns", 00:11:41.233 "req_id": 1 00:11:41.233 } 00:11:41.233 Got JSON-RPC error response 00:11:41.233 response: 00:11:41.233 { 00:11:41.233 "code": -32602, 00:11:41.233 "message": "Invalid parameters" 00:11:41.233 } 00:11:41.233 05:10:30 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:41.233 05:10:30 -- target/nmic.sh@29 -- # nmic_status=1 00:11:41.233 05:10:30 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:41.233 Adding namespace failed - expected result. 00:11:41.233 05:10:30 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:41.233 test case2: host connect to nvmf target in multiple paths 00:11:41.233 05:10:30 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:41.233 05:10:30 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:41.233 05:10:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.233 05:10:30 -- common/autotest_common.sh@10 -- # set +x 00:11:41.233 [2024-12-08 05:10:30.946152] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:41.233 05:10:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.233 05:10:30 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 --hostid=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:41.490 05:10:31 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 --hostid=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:41.491 05:10:31 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:41.491 05:10:31 -- common/autotest_common.sh@1187 -- # local i=0 00:11:41.491 05:10:31 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:41.491 05:10:31 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:41.491 05:10:31 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:43.431 05:10:33 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:43.690 05:10:33 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:43.690 05:10:33 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:43.690 05:10:33 -- common/autotest_common.sh@1196 -- # 
nvme_devices=1 00:11:43.690 05:10:33 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:43.690 05:10:33 -- common/autotest_common.sh@1197 -- # return 0 00:11:43.690 05:10:33 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:43.690 [global] 00:11:43.690 thread=1 00:11:43.690 invalidate=1 00:11:43.690 rw=write 00:11:43.690 time_based=1 00:11:43.690 runtime=1 00:11:43.690 ioengine=libaio 00:11:43.690 direct=1 00:11:43.690 bs=4096 00:11:43.690 iodepth=1 00:11:43.690 norandommap=0 00:11:43.690 numjobs=1 00:11:43.690 00:11:43.690 verify_dump=1 00:11:43.690 verify_backlog=512 00:11:43.690 verify_state_save=0 00:11:43.690 do_verify=1 00:11:43.690 verify=crc32c-intel 00:11:43.690 [job0] 00:11:43.690 filename=/dev/nvme0n1 00:11:43.690 Could not set queue depth (nvme0n1) 00:11:43.690 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:43.690 fio-3.35 00:11:43.690 Starting 1 thread 00:11:45.065 00:11:45.065 job0: (groupid=0, jobs=1): err= 0: pid=75435: Sun Dec 8 05:10:34 2024 00:11:45.065 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:45.065 slat (usec): min=13, max=195, avg=27.54, stdev=11.05 00:11:45.065 clat (usec): min=151, max=19860, avg=338.98, stdev=923.48 00:11:45.065 lat (usec): min=176, max=19962, avg=366.51, stdev=927.03 00:11:45.065 clat percentiles (usec): 00:11:45.065 | 1.00th=[ 176], 5.00th=[ 200], 10.00th=[ 212], 20.00th=[ 225], 00:11:45.065 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 258], 00:11:45.065 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 318], 00:11:45.065 | 99.00th=[ 3261], 99.50th=[ 5342], 99.90th=[16057], 99.95th=[19792], 00:11:45.065 | 99.99th=[19792] 00:11:45.065 write: IOPS=1837, BW=7349KiB/s (7525kB/s)(7356KiB/1001msec); 0 zone resets 00:11:45.065 slat (usec): min=23, max=164, avg=37.30, stdev= 9.23 00:11:45.065 clat (usec): min=58, max=20410, avg=194.68, stdev=648.54 00:11:45.065 lat (usec): min=129, max=20447, avg=231.98, stdev=649.33 00:11:45.065 clat percentiles (usec): 00:11:45.065 | 1.00th=[ 102], 5.00th=[ 120], 10.00th=[ 128], 20.00th=[ 137], 00:11:45.065 | 30.00th=[ 143], 40.00th=[ 149], 50.00th=[ 155], 60.00th=[ 163], 00:11:45.065 | 70.00th=[ 172], 80.00th=[ 182], 90.00th=[ 196], 95.00th=[ 215], 00:11:45.065 | 99.00th=[ 306], 99.50th=[ 693], 99.90th=[12256], 99.95th=[20317], 00:11:45.065 | 99.99th=[20317] 00:11:45.065 bw ( KiB/s): min= 8143, max= 8143, per=100.00%, avg=8143.00, stdev= 0.00, samples=1 00:11:45.065 iops : min= 2035, max= 2035, avg=2035.00, stdev= 0.00, samples=1 00:11:45.065 lat (usec) : 100=0.33%, 250=77.75%, 500=20.68%, 750=0.15%, 1000=0.12% 00:11:45.065 lat (msec) : 2=0.18%, 4=0.33%, 10=0.27%, 20=0.18%, 50=0.03% 00:11:45.065 cpu : usr=2.30%, sys=8.40%, ctx=3378, majf=0, minf=5 00:11:45.065 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.065 issued rwts: total=1536,1839,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.065 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.065 00:11:45.065 Run status group 0 (all jobs): 00:11:45.065 READ: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:11:45.065 WRITE: bw=7349KiB/s (7525kB/s), 7349KiB/s-7349KiB/s 
(7525kB/s-7525kB/s), io=7356KiB (7533kB), run=1001-1001msec 00:11:45.065 00:11:45.065 Disk stats (read/write): 00:11:45.065 nvme0n1: ios=1406/1536, merge=0/0, ticks=513/330, in_queue=843, util=90.31% 00:11:45.065 05:10:34 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:45.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:45.065 05:10:34 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:45.065 05:10:34 -- common/autotest_common.sh@1208 -- # local i=0 00:11:45.065 05:10:34 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:45.065 05:10:34 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.065 05:10:34 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:45.065 05:10:34 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.065 05:10:34 -- common/autotest_common.sh@1220 -- # return 0 00:11:45.065 05:10:34 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:45.065 05:10:34 -- target/nmic.sh@53 -- # nvmftestfini 00:11:45.065 05:10:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:45.065 05:10:34 -- nvmf/common.sh@116 -- # sync 00:11:45.065 05:10:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:45.065 05:10:34 -- nvmf/common.sh@119 -- # set +e 00:11:45.065 05:10:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:45.065 05:10:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:45.065 rmmod nvme_tcp 00:11:45.065 rmmod nvme_fabrics 00:11:45.065 rmmod nvme_keyring 00:11:45.065 05:10:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:45.065 05:10:34 -- nvmf/common.sh@123 -- # set -e 00:11:45.065 05:10:34 -- nvmf/common.sh@124 -- # return 0 00:11:45.065 05:10:34 -- nvmf/common.sh@477 -- # '[' -n 75351 ']' 00:11:45.065 05:10:34 -- nvmf/common.sh@478 -- # killprocess 75351 00:11:45.065 05:10:34 -- common/autotest_common.sh@936 -- # '[' -z 75351 ']' 00:11:45.065 05:10:34 -- common/autotest_common.sh@940 -- # kill -0 75351 00:11:45.065 05:10:34 -- common/autotest_common.sh@941 -- # uname 00:11:45.065 05:10:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:45.065 05:10:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75351 00:11:45.065 05:10:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:45.065 killing process with pid 75351 00:11:45.065 05:10:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:45.065 05:10:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75351' 00:11:45.065 05:10:34 -- common/autotest_common.sh@955 -- # kill 75351 00:11:45.065 05:10:34 -- common/autotest_common.sh@960 -- # wait 75351 00:11:45.324 05:10:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:45.324 05:10:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:45.324 05:10:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:45.324 05:10:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:45.324 05:10:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:45.324 05:10:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.324 05:10:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:45.324 05:10:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.324 05:10:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:45.324 00:11:45.324 real 0m5.082s 00:11:45.324 user 0m15.012s 00:11:45.324 sys 0m2.451s 00:11:45.324 05:10:34 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:11:45.324 05:10:34 -- common/autotest_common.sh@10 -- # set +x 00:11:45.324 ************************************ 00:11:45.324 END TEST nvmf_nmic 00:11:45.324 ************************************ 00:11:45.324 05:10:35 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:45.324 05:10:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:45.324 05:10:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:45.324 05:10:35 -- common/autotest_common.sh@10 -- # set +x 00:11:45.324 ************************************ 00:11:45.324 START TEST nvmf_fio_target 00:11:45.324 ************************************ 00:11:45.325 05:10:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:45.582 * Looking for test storage... 00:11:45.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:45.582 05:10:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:45.582 05:10:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:45.583 05:10:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:45.583 05:10:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:45.583 05:10:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:45.583 05:10:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:45.583 05:10:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:45.583 05:10:35 -- scripts/common.sh@335 -- # IFS=.-: 00:11:45.583 05:10:35 -- scripts/common.sh@335 -- # read -ra ver1 00:11:45.583 05:10:35 -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.583 05:10:35 -- scripts/common.sh@336 -- # read -ra ver2 00:11:45.583 05:10:35 -- scripts/common.sh@337 -- # local 'op=<' 00:11:45.583 05:10:35 -- scripts/common.sh@339 -- # ver1_l=2 00:11:45.583 05:10:35 -- scripts/common.sh@340 -- # ver2_l=1 00:11:45.583 05:10:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:45.583 05:10:35 -- scripts/common.sh@343 -- # case "$op" in 00:11:45.583 05:10:35 -- scripts/common.sh@344 -- # : 1 00:11:45.583 05:10:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:45.583 05:10:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:45.583 05:10:35 -- scripts/common.sh@364 -- # decimal 1 00:11:45.583 05:10:35 -- scripts/common.sh@352 -- # local d=1 00:11:45.583 05:10:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.583 05:10:35 -- scripts/common.sh@354 -- # echo 1 00:11:45.583 05:10:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:45.583 05:10:35 -- scripts/common.sh@365 -- # decimal 2 00:11:45.583 05:10:35 -- scripts/common.sh@352 -- # local d=2 00:11:45.583 05:10:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.583 05:10:35 -- scripts/common.sh@354 -- # echo 2 00:11:45.583 05:10:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:45.583 05:10:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:45.583 05:10:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:45.583 05:10:35 -- scripts/common.sh@367 -- # return 0 00:11:45.583 05:10:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.583 05:10:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:45.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.583 --rc genhtml_branch_coverage=1 00:11:45.583 --rc genhtml_function_coverage=1 00:11:45.583 --rc genhtml_legend=1 00:11:45.583 --rc geninfo_all_blocks=1 00:11:45.583 --rc geninfo_unexecuted_blocks=1 00:11:45.583 00:11:45.583 ' 00:11:45.583 05:10:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:45.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.583 --rc genhtml_branch_coverage=1 00:11:45.583 --rc genhtml_function_coverage=1 00:11:45.583 --rc genhtml_legend=1 00:11:45.583 --rc geninfo_all_blocks=1 00:11:45.583 --rc geninfo_unexecuted_blocks=1 00:11:45.583 00:11:45.583 ' 00:11:45.583 05:10:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:45.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.583 --rc genhtml_branch_coverage=1 00:11:45.583 --rc genhtml_function_coverage=1 00:11:45.583 --rc genhtml_legend=1 00:11:45.583 --rc geninfo_all_blocks=1 00:11:45.583 --rc geninfo_unexecuted_blocks=1 00:11:45.583 00:11:45.583 ' 00:11:45.583 05:10:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:45.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.583 --rc genhtml_branch_coverage=1 00:11:45.583 --rc genhtml_function_coverage=1 00:11:45.583 --rc genhtml_legend=1 00:11:45.583 --rc geninfo_all_blocks=1 00:11:45.583 --rc geninfo_unexecuted_blocks=1 00:11:45.583 00:11:45.583 ' 00:11:45.583 05:10:35 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:45.583 05:10:35 -- nvmf/common.sh@7 -- # uname -s 00:11:45.583 05:10:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.583 05:10:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.583 05:10:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.583 05:10:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.583 05:10:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.583 05:10:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.583 05:10:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.583 05:10:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.583 05:10:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.583 05:10:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.583 05:10:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:11:45.583 
05:10:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:11:45.583 05:10:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.583 05:10:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.583 05:10:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:45.583 05:10:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:45.583 05:10:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.583 05:10:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.583 05:10:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.583 05:10:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.583 05:10:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.583 05:10:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.583 05:10:35 -- paths/export.sh@5 -- # export PATH 00:11:45.583 05:10:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.583 05:10:35 -- nvmf/common.sh@46 -- # : 0 00:11:45.583 05:10:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:45.583 05:10:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:45.583 05:10:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:45.583 05:10:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.583 05:10:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.583 05:10:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
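Note: for reference, the "single bdev can't be used in multiple subsystems" case exercised in the nmic run above reduces to the RPC sequence below; a sketch using the same scripts/rpc.py commands recorded in that trace (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py), with the failure expected on the last call.

    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # first claim on Malloc0 succeeds
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # rejected: bdev already claimed
    # -> JSON-RPC error -32602 "Invalid parameters", as shown in the nmic output above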
00:11:45.583 05:10:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:45.583 05:10:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:45.583 05:10:35 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:45.583 05:10:35 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:45.583 05:10:35 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:45.583 05:10:35 -- target/fio.sh@16 -- # nvmftestinit 00:11:45.583 05:10:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:45.583 05:10:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.583 05:10:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:45.583 05:10:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:45.583 05:10:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:45.583 05:10:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.583 05:10:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:45.583 05:10:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.583 05:10:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:45.583 05:10:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:45.583 05:10:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:45.583 05:10:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:45.583 05:10:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:45.583 05:10:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:45.583 05:10:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:45.583 05:10:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:45.583 05:10:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:45.583 05:10:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:45.583 05:10:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:45.583 05:10:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:45.583 05:10:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:45.583 05:10:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:45.583 05:10:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:45.583 05:10:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:45.583 05:10:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:45.583 05:10:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:45.583 05:10:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:45.583 05:10:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:45.583 Cannot find device "nvmf_tgt_br" 00:11:45.583 05:10:35 -- nvmf/common.sh@154 -- # true 00:11:45.583 05:10:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:45.583 Cannot find device "nvmf_tgt_br2" 00:11:45.583 05:10:35 -- nvmf/common.sh@155 -- # true 00:11:45.583 05:10:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:45.583 05:10:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:45.841 Cannot find device "nvmf_tgt_br" 00:11:45.841 05:10:35 -- nvmf/common.sh@157 -- # true 00:11:45.841 05:10:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:45.841 Cannot find device "nvmf_tgt_br2" 00:11:45.841 05:10:35 -- nvmf/common.sh@158 -- # true 00:11:45.841 05:10:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:45.841 05:10:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:45.841 05:10:35 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:45.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:45.841 05:10:35 -- nvmf/common.sh@161 -- # true 00:11:45.841 05:10:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:45.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:45.841 05:10:35 -- nvmf/common.sh@162 -- # true 00:11:45.841 05:10:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:45.841 05:10:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:45.841 05:10:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:45.841 05:10:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:45.841 05:10:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:45.841 05:10:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:45.841 05:10:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:45.841 05:10:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:45.841 05:10:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:45.841 05:10:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:45.841 05:10:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:45.841 05:10:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:45.841 05:10:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:45.841 05:10:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:45.841 05:10:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:45.841 05:10:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:45.841 05:10:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:45.841 05:10:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:45.841 05:10:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:45.841 05:10:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:45.841 05:10:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:45.841 05:10:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:45.841 05:10:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:45.841 05:10:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:46.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:46.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:11:46.099 00:11:46.099 --- 10.0.0.2 ping statistics --- 00:11:46.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.099 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:11:46.099 05:10:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:46.099 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:46.099 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:11:46.099 00:11:46.099 --- 10.0.0.3 ping statistics --- 00:11:46.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.099 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:11:46.099 05:10:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:46.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:46.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:11:46.099 00:11:46.099 --- 10.0.0.1 ping statistics --- 00:11:46.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.099 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:11:46.099 05:10:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:46.099 05:10:35 -- nvmf/common.sh@421 -- # return 0 00:11:46.099 05:10:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:46.099 05:10:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:46.099 05:10:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:46.099 05:10:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:46.099 05:10:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:46.099 05:10:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:46.099 05:10:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:46.099 05:10:35 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:46.099 05:10:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:46.099 05:10:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:46.099 05:10:35 -- common/autotest_common.sh@10 -- # set +x 00:11:46.099 05:10:35 -- nvmf/common.sh@469 -- # nvmfpid=75619 00:11:46.099 05:10:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:46.099 05:10:35 -- nvmf/common.sh@470 -- # waitforlisten 75619 00:11:46.099 05:10:35 -- common/autotest_common.sh@829 -- # '[' -z 75619 ']' 00:11:46.099 05:10:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.099 05:10:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:46.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.099 05:10:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.099 05:10:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:46.099 05:10:35 -- common/autotest_common.sh@10 -- # set +x 00:11:46.099 [2024-12-08 05:10:35.737026] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:46.099 [2024-12-08 05:10:35.737162] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.358 [2024-12-08 05:10:35.887916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:46.358 [2024-12-08 05:10:35.924011] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:46.358 [2024-12-08 05:10:35.924168] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.358 [2024-12-08 05:10:35.924182] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:46.358 [2024-12-08 05:10:35.924191] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.358 [2024-12-08 05:10:35.924306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.358 [2024-12-08 05:10:35.924361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.358 [2024-12-08 05:10:35.924841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:46.358 [2024-12-08 05:10:35.924855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.358 05:10:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:46.358 05:10:36 -- common/autotest_common.sh@862 -- # return 0 00:11:46.358 05:10:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:46.358 05:10:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:46.358 05:10:36 -- common/autotest_common.sh@10 -- # set +x 00:11:46.358 05:10:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.358 05:10:36 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:46.925 [2024-12-08 05:10:36.490691] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.925 05:10:36 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:47.184 05:10:36 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:47.184 05:10:36 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:47.442 05:10:37 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:47.442 05:10:37 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:47.699 05:10:37 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:47.699 05:10:37 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:48.264 05:10:37 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:48.264 05:10:37 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:48.521 05:10:38 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:48.779 05:10:38 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:48.780 05:10:38 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:49.039 05:10:38 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:49.039 05:10:38 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:49.299 05:10:38 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:49.299 05:10:38 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:49.897 05:10:39 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:50.154 05:10:39 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:50.154 05:10:39 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:50.718 05:10:40 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:50.718 05:10:40 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:51.282 05:10:40 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.541 [2024-12-08 05:10:41.193282] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.541 05:10:41 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:52.107 05:10:41 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:52.674 05:10:42 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 --hostid=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:52.674 05:10:42 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:52.674 05:10:42 -- common/autotest_common.sh@1187 -- # local i=0 00:11:52.674 05:10:42 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.674 05:10:42 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:11:52.674 05:10:42 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:11:52.674 05:10:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:54.576 05:10:44 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:54.576 05:10:44 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.576 05:10:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:54.835 05:10:44 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:11:54.835 05:10:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.835 05:10:44 -- common/autotest_common.sh@1197 -- # return 0 00:11:54.835 05:10:44 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:54.835 [global] 00:11:54.835 thread=1 00:11:54.835 invalidate=1 00:11:54.835 rw=write 00:11:54.835 time_based=1 00:11:54.835 runtime=1 00:11:54.835 ioengine=libaio 00:11:54.835 direct=1 00:11:54.835 bs=4096 00:11:54.835 iodepth=1 00:11:54.835 norandommap=0 00:11:54.835 numjobs=1 00:11:54.835 00:11:54.835 verify_dump=1 00:11:54.835 verify_backlog=512 00:11:54.835 verify_state_save=0 00:11:54.835 do_verify=1 00:11:54.835 verify=crc32c-intel 00:11:54.835 [job0] 00:11:54.835 filename=/dev/nvme0n1 00:11:54.835 [job1] 00:11:54.835 filename=/dev/nvme0n2 00:11:54.835 [job2] 00:11:54.835 filename=/dev/nvme0n3 00:11:54.835 [job3] 00:11:54.835 filename=/dev/nvme0n4 00:11:54.835 Could not set queue depth (nvme0n1) 00:11:54.835 Could not set queue depth (nvme0n2) 00:11:54.835 Could not set queue depth (nvme0n3) 00:11:54.835 Could not set queue depth (nvme0n4) 00:11:55.093 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:55.093 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:55.093 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:55.093 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:55.093 fio-3.35 00:11:55.093 Starting 4 threads 00:11:56.470 00:11:56.470 job0: (groupid=0, jobs=1): err= 0: pid=75816: Sun Dec 8 05:10:45 2024 00:11:56.470 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 
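Before these four jobs start, the target has been provisioned entirely through rpc.py and the host has attached with nvme-cli; pulled together, the sequence above amounts to the sketch below. The commands, names, and addresses are taken from this trace (Malloc0..Malloc6 are 64 MiB malloc bdevs with 512-byte blocks); only the loops and the polling wait at the end are a condensed paraphrase of fio.sh and waitforserial, not the verbatim scripts.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                         # TCP transport, 8 KiB in-capsule data
for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done          # Malloc0..Malloc6
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'        # striped raid0 bdev
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do                         # four namespaces -> nvme0n1..nvme0n4
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 \
    --hostid=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32
# waitforserial: block until all four namespaces are visible on the host before handing them to fio
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -eq 4 ]; do sleep 2; done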
00:11:56.470 slat (usec): min=10, max=295, avg=28.17, stdev=14.41 00:11:56.470 clat (usec): min=3, max=19478, avg=387.46, stdev=1167.54 00:11:56.470 lat (usec): min=167, max=19504, avg=415.62, stdev=1167.79 00:11:56.470 clat percentiles (usec): 00:11:56.470 | 1.00th=[ 169], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 215], 00:11:56.470 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:11:56.470 | 70.00th=[ 247], 80.00th=[ 273], 90.00th=[ 367], 95.00th=[ 457], 00:11:56.470 | 99.00th=[ 5080], 99.50th=[10421], 99.90th=[14877], 99.95th=[19530], 00:11:56.470 | 99.99th=[19530] 00:11:56.470 write: IOPS=1039, BW=4160KiB/s (4260kB/s)(4164KiB/1001msec); 0 zone resets 00:11:56.470 slat (usec): min=15, max=1212, avg=43.71, stdev=45.60 00:11:56.470 clat (usec): min=5, max=27107, avg=500.98, stdev=1812.45 00:11:56.470 lat (usec): min=138, max=27174, avg=544.69, stdev=1815.27 00:11:56.470 clat percentiles (usec): 00:11:56.470 | 1.00th=[ 116], 5.00th=[ 129], 10.00th=[ 135], 20.00th=[ 141], 00:11:56.470 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 157], 60.00th=[ 165], 00:11:56.470 | 70.00th=[ 184], 80.00th=[ 245], 90.00th=[ 420], 95.00th=[ 1254], 00:11:56.470 | 99.00th=[ 8979], 99.50th=[11731], 99.90th=[26870], 99.95th=[27132], 00:11:56.470 | 99.99th=[27132] 00:11:56.470 bw ( KiB/s): min= 4576, max= 4576, per=29.92%, avg=4576.00, stdev= 0.00, samples=1 00:11:56.470 iops : min= 1144, max= 1144, avg=1144.00, stdev= 0.00, samples=1 00:11:56.470 lat (usec) : 4=0.05%, 10=0.05%, 100=0.05%, 250=76.42%, 500=17.34% 00:11:56.470 lat (usec) : 750=1.26%, 1000=0.82% 00:11:56.470 lat (msec) : 2=1.16%, 4=0.97%, 10=1.16%, 20=0.63%, 50=0.10% 00:11:56.470 cpu : usr=1.80%, sys=5.40%, ctx=2078, majf=0, minf=11 00:11:56.470 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:56.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.470 issued rwts: total=1024,1041,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.470 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:56.470 job1: (groupid=0, jobs=1): err= 0: pid=75817: Sun Dec 8 05:10:45 2024 00:11:56.470 read: IOPS=494, BW=1978KiB/s (2025kB/s)(1980KiB/1001msec) 00:11:56.470 slat (usec): min=9, max=1033, avg=30.90, stdev=45.95 00:11:56.470 clat (usec): min=179, max=16255, avg=825.72, stdev=2225.50 00:11:56.470 lat (usec): min=207, max=16298, avg=856.62, stdev=2235.65 00:11:56.470 clat percentiles (usec): 00:11:56.470 | 1.00th=[ 184], 5.00th=[ 196], 10.00th=[ 206], 20.00th=[ 223], 00:11:56.470 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 253], 60.00th=[ 269], 00:11:56.470 | 70.00th=[ 289], 80.00th=[ 379], 90.00th=[ 619], 95.00th=[ 5538], 00:11:56.470 | 99.00th=[14877], 99.50th=[15795], 99.90th=[16319], 99.95th=[16319], 00:11:56.470 | 99.99th=[16319] 00:11:56.470 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:56.470 slat (usec): min=18, max=361, avg=46.81, stdev=24.28 00:11:56.470 clat (usec): min=123, max=33406, avg=1071.01, stdev=3041.01 00:11:56.470 lat (usec): min=154, max=33477, avg=1117.82, stdev=3046.80 00:11:56.470 clat percentiles (usec): 00:11:56.470 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 145], 20.00th=[ 151], 00:11:56.470 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 176], 60.00th=[ 194], 00:11:56.470 | 70.00th=[ 249], 80.00th=[ 330], 90.00th=[ 2474], 95.00th=[ 7111], 00:11:56.470 | 99.00th=[14877], 99.50th=[16909], 99.90th=[33424], 99.95th=[33424], 00:11:56.470 | 
99.99th=[33424] 00:11:56.470 bw ( KiB/s): min= 3320, max= 3320, per=21.71%, avg=3320.00, stdev= 0.00, samples=1 00:11:56.470 iops : min= 830, max= 830, avg=830.00, stdev= 0.00, samples=1 00:11:56.470 lat (usec) : 250=58.59%, 500=27.51%, 750=2.38%, 1000=0.79% 00:11:56.470 lat (msec) : 2=1.69%, 4=2.38%, 10=4.07%, 20=2.48%, 50=0.10% 00:11:56.470 cpu : usr=0.70%, sys=2.90%, ctx=1019, majf=0, minf=7 00:11:56.470 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:56.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.470 issued rwts: total=495,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.470 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:56.470 job2: (groupid=0, jobs=1): err= 0: pid=75818: Sun Dec 8 05:10:45 2024 00:11:56.470 read: IOPS=1084, BW=4340KiB/s (4444kB/s)(4344KiB/1001msec) 00:11:56.470 slat (usec): min=11, max=565, avg=29.55, stdev=19.51 00:11:56.470 clat (usec): min=165, max=24544, avg=426.12, stdev=1154.49 00:11:56.470 lat (usec): min=190, max=24581, avg=455.67, stdev=1157.30 00:11:56.470 clat percentiles (usec): 00:11:56.470 | 1.00th=[ 186], 5.00th=[ 204], 10.00th=[ 212], 20.00th=[ 223], 00:11:56.470 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 255], 00:11:56.470 | 70.00th=[ 273], 80.00th=[ 310], 90.00th=[ 363], 95.00th=[ 545], 00:11:56.470 | 99.00th=[ 5735], 99.50th=[ 7308], 99.90th=[12911], 99.95th=[24511], 00:11:56.470 | 99.99th=[24511] 00:11:56.470 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:56.470 slat (usec): min=12, max=564, avg=39.44, stdev=19.40 00:11:56.470 clat (usec): min=22, max=13945, avg=283.77, stdev=939.31 00:11:56.470 lat (usec): min=145, max=14059, avg=323.21, stdev=940.54 00:11:56.471 clat percentiles (usec): 00:11:56.471 | 1.00th=[ 120], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 155], 00:11:56.471 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 178], 00:11:56.471 | 70.00th=[ 186], 80.00th=[ 198], 90.00th=[ 235], 95.00th=[ 289], 00:11:56.471 | 99.00th=[ 3687], 99.50th=[ 9241], 99.90th=[13566], 99.95th=[13960], 00:11:56.471 | 99.99th=[13960] 00:11:56.471 bw ( KiB/s): min= 4104, max= 8192, per=40.20%, avg=6148.00, stdev=2890.65, samples=2 00:11:56.471 iops : min= 1026, max= 2048, avg=1537.00, stdev=722.66, samples=2 00:11:56.471 lat (usec) : 50=0.08%, 250=77.12%, 500=18.95%, 750=1.11%, 1000=0.31% 00:11:56.471 lat (msec) : 2=0.50%, 4=0.61%, 10=0.99%, 20=0.31%, 50=0.04% 00:11:56.471 cpu : usr=1.80%, sys=7.20%, ctx=2628, majf=0, minf=13 00:11:56.471 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:56.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.471 issued rwts: total=1086,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.471 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:56.471 job3: (groupid=0, jobs=1): err= 0: pid=75819: Sun Dec 8 05:10:45 2024 00:11:56.471 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:56.471 slat (usec): min=10, max=13011, avg=55.86, stdev=574.09 00:11:56.471 clat (usec): min=5, max=20020, avg=1162.79, stdev=2562.51 00:11:56.471 lat (usec): min=212, max=20047, avg=1218.65, stdev=2618.30 00:11:56.471 clat percentiles (usec): 00:11:56.471 | 1.00th=[ 210], 5.00th=[ 231], 10.00th=[ 253], 20.00th=[ 293], 00:11:56.471 | 30.00th=[ 310], 
40.00th=[ 326], 50.00th=[ 351], 60.00th=[ 383], 00:11:56.471 | 70.00th=[ 453], 80.00th=[ 562], 90.00th=[ 2802], 95.00th=[ 6390], 00:11:56.471 | 99.00th=[15008], 99.50th=[16909], 99.90th=[20055], 99.95th=[20055], 00:11:56.471 | 99.99th=[20055] 00:11:56.471 write: IOPS=737, BW=2949KiB/s (3020kB/s)(2952KiB/1001msec); 0 zone resets 00:11:56.471 slat (usec): min=17, max=901, avg=43.02, stdev=38.24 00:11:56.471 clat (usec): min=22, max=19429, avg=460.57, stdev=1233.13 00:11:56.471 lat (usec): min=145, max=19483, avg=503.59, stdev=1237.91 00:11:56.471 clat percentiles (usec): 00:11:56.471 | 1.00th=[ 129], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 157], 00:11:56.471 | 30.00th=[ 167], 40.00th=[ 178], 50.00th=[ 198], 60.00th=[ 243], 00:11:56.471 | 70.00th=[ 265], 80.00th=[ 297], 90.00th=[ 396], 95.00th=[ 1926], 00:11:56.471 | 99.00th=[ 5342], 99.50th=[ 9896], 99.90th=[19530], 99.95th=[19530], 00:11:56.471 | 99.99th=[19530] 00:11:56.471 bw ( KiB/s): min= 1800, max= 4104, per=19.30%, avg=2952.00, stdev=1629.17, samples=2 00:11:56.471 iops : min= 450, max= 1026, avg=738.00, stdev=407.29, samples=2 00:11:56.471 lat (usec) : 10=0.08%, 50=0.08%, 100=0.08%, 250=40.64%, 500=43.68% 00:11:56.471 lat (usec) : 750=4.40%, 1000=0.96% 00:11:56.471 lat (msec) : 2=2.40%, 4=3.20%, 10=3.44%, 20=0.96%, 50=0.08% 00:11:56.471 cpu : usr=0.50%, sys=4.20%, ctx=1256, majf=0, minf=7 00:11:56.471 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:56.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.471 issued rwts: total=512,738,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.471 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:56.471 00:11:56.471 Run status group 0 (all jobs): 00:11:56.471 READ: bw=12.2MiB/s (12.8MB/s), 1978KiB/s-4340KiB/s (2025kB/s-4444kB/s), io=12.2MiB (12.8MB), run=1001-1001msec 00:11:56.471 WRITE: bw=14.9MiB/s (15.7MB/s), 2046KiB/s-6138KiB/s (2095kB/s-6285kB/s), io=14.9MiB (15.7MB), run=1001-1001msec 00:11:56.471 00:11:56.471 Disk stats (read/write): 00:11:56.471 nvme0n1: ios=736/1024, merge=0/0, ticks=284/537, in_queue=821, util=85.69% 00:11:56.471 nvme0n2: ios=110/512, merge=0/0, ticks=284/558, in_queue=842, util=86.79% 00:11:56.471 nvme0n3: ios=834/1024, merge=0/0, ticks=465/358, in_queue=823, util=90.34% 00:11:56.471 nvme0n4: ios=438/512, merge=0/0, ticks=523/251, in_queue=774, util=87.79% 00:11:56.471 05:10:45 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:56.471 [global] 00:11:56.471 thread=1 00:11:56.471 invalidate=1 00:11:56.471 rw=randwrite 00:11:56.471 time_based=1 00:11:56.471 runtime=1 00:11:56.471 ioengine=libaio 00:11:56.471 direct=1 00:11:56.471 bs=4096 00:11:56.471 iodepth=1 00:11:56.471 norandommap=0 00:11:56.471 numjobs=1 00:11:56.471 00:11:56.471 verify_dump=1 00:11:56.471 verify_backlog=512 00:11:56.471 verify_state_save=0 00:11:56.471 do_verify=1 00:11:56.471 verify=crc32c-intel 00:11:56.471 [job0] 00:11:56.471 filename=/dev/nvme0n1 00:11:56.471 [job1] 00:11:56.471 filename=/dev/nvme0n2 00:11:56.471 [job2] 00:11:56.471 filename=/dev/nvme0n3 00:11:56.471 [job3] 00:11:56.471 filename=/dev/nvme0n4 00:11:56.471 Could not set queue depth (nvme0n1) 00:11:56.471 Could not set queue depth (nvme0n2) 00:11:56.471 Could not set queue depth (nvme0n3) 00:11:56.471 Could not set queue depth (nvme0n4) 00:11:56.471 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:56.471 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:56.471 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:56.471 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:56.471 fio-3.35 00:11:56.471 Starting 4 threads 00:11:57.844 00:11:57.844 job0: (groupid=0, jobs=1): err= 0: pid=75874: Sun Dec 8 05:10:47 2024 00:11:57.844 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:57.844 slat (usec): min=16, max=5157, avg=25.63, stdev=113.52 00:11:57.844 clat (usec): min=33, max=9221, avg=235.59, stdev=200.60 00:11:57.844 lat (usec): min=178, max=9246, avg=261.22, stdev=228.30 00:11:57.844 clat percentiles (usec): 00:11:57.844 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 217], 00:11:57.844 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 235], 00:11:57.844 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 265], 00:11:57.844 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 338], 99.95th=[ 1139], 00:11:57.844 | 99.99th=[ 9241] 00:11:57.844 write: IOPS=2255, BW=9023KiB/s (9240kB/s)(9032KiB/1001msec); 0 zone resets 00:11:57.844 slat (usec): min=18, max=107, avg=35.74, stdev= 5.83 00:11:57.844 clat (usec): min=102, max=510, avg=164.69, stdev=20.79 00:11:57.844 lat (usec): min=129, max=547, avg=200.43, stdev=22.39 00:11:57.844 clat percentiles (usec): 00:11:57.844 | 1.00th=[ 121], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 151], 00:11:57.844 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:11:57.844 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 188], 95.00th=[ 198], 00:11:57.844 | 99.00th=[ 217], 99.50th=[ 227], 99.90th=[ 326], 99.95th=[ 498], 00:11:57.844 | 99.99th=[ 510] 00:11:57.844 bw ( KiB/s): min= 9024, max= 9024, per=30.30%, avg=9024.00, stdev= 0.00, samples=1 00:11:57.844 iops : min= 2256, max= 2256, avg=2256.00, stdev= 0.00, samples=1 00:11:57.844 lat (usec) : 50=0.02%, 250=92.75%, 500=7.15%, 750=0.02% 00:11:57.844 lat (msec) : 2=0.02%, 10=0.02% 00:11:57.844 cpu : usr=3.10%, sys=9.90%, ctx=4306, majf=0, minf=17 00:11:57.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:57.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.844 issued rwts: total=2048,2258,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:57.844 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:57.844 job1: (groupid=0, jobs=1): err= 0: pid=75875: Sun Dec 8 05:10:47 2024 00:11:57.844 read: IOPS=1114, BW=4460KiB/s (4567kB/s)(4540KiB/1018msec) 00:11:57.844 slat (usec): min=15, max=6385, avg=36.81, stdev=188.73 00:11:57.844 clat (usec): min=56, max=19810, avg=405.43, stdev=1019.61 00:11:57.844 lat (usec): min=168, max=19852, avg=442.25, stdev=1035.98 00:11:57.844 clat percentiles (usec): 00:11:57.844 | 1.00th=[ 165], 5.00th=[ 210], 10.00th=[ 223], 20.00th=[ 235], 00:11:57.844 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 262], 60.00th=[ 273], 00:11:57.844 | 70.00th=[ 293], 80.00th=[ 359], 90.00th=[ 404], 95.00th=[ 494], 00:11:57.844 | 99.00th=[ 3949], 99.50th=[ 6783], 99.90th=[15926], 99.95th=[19792], 00:11:57.844 | 99.99th=[19792] 00:11:57.844 write: IOPS=1508, BW=6035KiB/s (6180kB/s)(6144KiB/1018msec); 0 zone resets 00:11:57.844 slat (usec): min=22, max=537, 
avg=43.54, stdev=15.72 00:11:57.844 clat (usec): min=86, max=13803, avg=285.99, stdev=541.22 00:11:57.844 lat (usec): min=135, max=13845, avg=329.53, stdev=542.74 00:11:57.844 clat percentiles (usec): 00:11:57.844 | 1.00th=[ 122], 5.00th=[ 155], 10.00th=[ 169], 20.00th=[ 184], 00:11:57.844 | 30.00th=[ 196], 40.00th=[ 210], 50.00th=[ 227], 60.00th=[ 247], 00:11:57.844 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 314], 95.00th=[ 343], 00:11:57.844 | 99.00th=[ 1876], 99.50th=[ 2638], 99.90th=[ 8586], 99.95th=[13829], 00:11:57.844 | 99.99th=[13829] 00:11:57.844 bw ( KiB/s): min= 4608, max= 7680, per=20.63%, avg=6144.00, stdev=2172.23, samples=2 00:11:57.844 iops : min= 1152, max= 1920, avg=1536.00, stdev=543.06, samples=2 00:11:57.844 lat (usec) : 100=0.07%, 250=50.95%, 500=45.53%, 750=1.01%, 1000=0.30% 00:11:57.844 lat (msec) : 2=0.79%, 4=0.71%, 10=0.45%, 20=0.19% 00:11:57.844 cpu : usr=1.97%, sys=8.06%, ctx=2671, majf=0, minf=9 00:11:57.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:57.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.844 issued rwts: total=1135,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:57.844 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:57.844 job2: (groupid=0, jobs=1): err= 0: pid=75876: Sun Dec 8 05:10:47 2024 00:11:57.844 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:57.844 slat (usec): min=18, max=242, avg=25.86, stdev= 6.59 00:11:57.844 clat (usec): min=181, max=7081, avg=228.84, stdev=163.74 00:11:57.844 lat (usec): min=205, max=7123, avg=254.69, stdev=164.30 00:11:57.844 clat percentiles (usec): 00:11:57.844 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:11:57.844 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 227], 00:11:57.844 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 260], 00:11:57.844 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 441], 99.95th=[ 2868], 00:11:57.844 | 99.99th=[ 7111] 00:11:57.844 write: IOPS=2247, BW=8991KiB/s (9207kB/s)(9000KiB/1001msec); 0 zone resets 00:11:57.844 slat (usec): min=26, max=387, avg=38.63, stdev= 9.54 00:11:57.844 clat (usec): min=37, max=395, avg=167.92, stdev=18.63 00:11:57.844 lat (usec): min=161, max=440, avg=206.55, stdev=20.47 00:11:57.844 clat percentiles (usec): 00:11:57.844 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 153], 00:11:57.844 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:11:57.844 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 200], 00:11:57.844 | 99.00th=[ 223], 99.50th=[ 233], 99.90th=[ 281], 99.95th=[ 326], 00:11:57.844 | 99.99th=[ 396] 00:11:57.844 bw ( KiB/s): min= 8728, max= 8728, per=29.30%, avg=8728.00, stdev= 0.00, samples=1 00:11:57.844 iops : min= 2182, max= 2182, avg=2182.00, stdev= 0.00, samples=1 00:11:57.844 lat (usec) : 50=0.02%, 100=0.02%, 250=95.07%, 500=4.84% 00:11:57.844 lat (msec) : 4=0.02%, 10=0.02% 00:11:57.844 cpu : usr=2.60%, sys=11.70%, ctx=4301, majf=0, minf=10 00:11:57.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:57.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.844 issued rwts: total=2048,2250,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:57.844 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:57.844 job3: (groupid=0, 
jobs=1): err= 0: pid=75877: Sun Dec 8 05:10:47 2024 00:11:57.844 read: IOPS=1356, BW=5425KiB/s (5555kB/s)(5512KiB/1016msec) 00:11:57.844 slat (usec): min=13, max=151, avg=30.01, stdev= 7.55 00:11:57.844 clat (usec): min=149, max=16716, avg=360.19, stdev=622.91 00:11:57.844 lat (usec): min=167, max=16745, avg=390.20, stdev=623.75 00:11:57.844 clat percentiles (usec): 00:11:57.844 | 1.00th=[ 167], 5.00th=[ 219], 10.00th=[ 231], 20.00th=[ 241], 00:11:57.844 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 281], 00:11:57.844 | 70.00th=[ 326], 80.00th=[ 379], 90.00th=[ 469], 95.00th=[ 537], 00:11:57.844 | 99.00th=[ 1811], 99.50th=[ 4359], 99.90th=[ 8848], 99.95th=[16712], 00:11:57.844 | 99.99th=[16712] 00:11:57.844 write: IOPS=1511, BW=6047KiB/s (6192kB/s)(6144KiB/1016msec); 0 zone resets 00:11:57.844 slat (usec): min=21, max=700, avg=41.78, stdev=20.80 00:11:57.844 clat (usec): min=27, max=10082, avg=262.00, stdev=356.68 00:11:57.844 lat (usec): min=166, max=10123, avg=303.78, stdev=357.69 00:11:57.844 clat percentiles (usec): 00:11:57.844 | 1.00th=[ 153], 5.00th=[ 172], 10.00th=[ 180], 20.00th=[ 190], 00:11:57.844 | 30.00th=[ 202], 40.00th=[ 215], 50.00th=[ 231], 60.00th=[ 253], 00:11:57.844 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 310], 95.00th=[ 347], 00:11:57.844 | 99.00th=[ 545], 99.50th=[ 1369], 99.90th=[ 8586], 99.95th=[10028], 00:11:57.844 | 99.99th=[10028] 00:11:57.844 bw ( KiB/s): min= 4096, max= 8192, per=20.63%, avg=6144.00, stdev=2896.31, samples=2 00:11:57.844 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:11:57.844 lat (usec) : 50=0.03%, 100=0.03%, 250=45.64%, 500=50.24%, 750=2.75% 00:11:57.844 lat (usec) : 1000=0.14% 00:11:57.844 lat (msec) : 2=0.62%, 4=0.24%, 10=0.24%, 20=0.07% 00:11:57.844 cpu : usr=2.56%, sys=7.78%, ctx=2923, majf=0, minf=11 00:11:57.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:57.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.844 issued rwts: total=1378,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:57.844 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:57.844 00:11:57.844 Run status group 0 (all jobs): 00:11:57.844 READ: bw=25.4MiB/s (26.6MB/s), 4460KiB/s-8184KiB/s (4567kB/s-8380kB/s), io=25.8MiB (27.1MB), run=1001-1018msec 00:11:57.844 WRITE: bw=29.1MiB/s (30.5MB/s), 6035KiB/s-9023KiB/s (6180kB/s-9240kB/s), io=29.6MiB (31.0MB), run=1001-1018msec 00:11:57.844 00:11:57.844 Disk stats (read/write): 00:11:57.844 nvme0n1: ios=1763/2048, merge=0/0, ticks=526/361, in_queue=887, util=90.04% 00:11:57.844 nvme0n2: ios=1083/1536, merge=0/0, ticks=339/468, in_queue=807, util=87.85% 00:11:57.844 nvme0n3: ios=1692/2048, merge=0/0, ticks=412/359, in_queue=771, util=89.26% 00:11:57.844 nvme0n4: ios=1144/1536, merge=0/0, ticks=359/430, in_queue=789, util=89.61% 00:11:57.844 05:10:47 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:57.844 [global] 00:11:57.844 thread=1 00:11:57.844 invalidate=1 00:11:57.844 rw=write 00:11:57.844 time_based=1 00:11:57.844 runtime=1 00:11:57.844 ioengine=libaio 00:11:57.844 direct=1 00:11:57.844 bs=4096 00:11:57.844 iodepth=128 00:11:57.844 norandommap=0 00:11:57.844 numjobs=1 00:11:57.844 00:11:57.844 verify_dump=1 00:11:57.844 verify_backlog=512 00:11:57.844 verify_state_save=0 00:11:57.844 do_verify=1 00:11:57.844 verify=crc32c-intel 00:11:57.844 [job0] 
00:11:57.844 filename=/dev/nvme0n1 00:11:57.844 [job1] 00:11:57.844 filename=/dev/nvme0n2 00:11:57.844 [job2] 00:11:57.844 filename=/dev/nvme0n3 00:11:57.845 [job3] 00:11:57.845 filename=/dev/nvme0n4 00:11:57.845 Could not set queue depth (nvme0n1) 00:11:57.845 Could not set queue depth (nvme0n2) 00:11:57.845 Could not set queue depth (nvme0n3) 00:11:57.845 Could not set queue depth (nvme0n4) 00:11:57.845 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:57.845 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:57.845 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:57.845 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:57.845 fio-3.35 00:11:57.845 Starting 4 threads 00:11:59.216 00:11:59.216 job0: (groupid=0, jobs=1): err= 0: pid=75930: Sun Dec 8 05:10:48 2024 00:11:59.216 read: IOPS=2432, BW=9731KiB/s (9964kB/s)(9828KiB/1010msec) 00:11:59.216 slat (usec): min=3, max=22189, avg=240.46, stdev=1432.69 00:11:59.216 clat (usec): min=2071, max=69843, avg=30908.40, stdev=13367.06 00:11:59.216 lat (usec): min=10955, max=69857, avg=31148.86, stdev=13410.83 00:11:59.216 clat percentiles (usec): 00:11:59.216 | 1.00th=[11076], 5.00th=[13435], 10.00th=[16319], 20.00th=[18744], 00:11:59.216 | 30.00th=[21890], 40.00th=[24773], 50.00th=[28443], 60.00th=[32900], 00:11:59.216 | 70.00th=[36963], 80.00th=[41681], 90.00th=[50594], 95.00th=[57934], 00:11:59.216 | 99.00th=[66847], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:11:59.216 | 99.99th=[69731] 00:11:59.216 write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec); 0 zone resets 00:11:59.216 slat (usec): min=11, max=16609, avg=154.15, stdev=882.06 00:11:59.216 clat (usec): min=7251, max=44710, avg=19872.37, stdev=7285.60 00:11:59.216 lat (usec): min=7273, max=45641, avg=20026.53, stdev=7335.77 00:11:59.216 clat percentiles (usec): 00:11:59.216 | 1.00th=[ 7701], 5.00th=[10028], 10.00th=[11207], 20.00th=[12911], 00:11:59.216 | 30.00th=[14484], 40.00th=[17171], 50.00th=[20055], 60.00th=[21365], 00:11:59.216 | 70.00th=[23462], 80.00th=[25297], 90.00th=[30016], 95.00th=[34341], 00:11:59.216 | 99.00th=[39584], 99.50th=[40633], 99.90th=[44827], 99.95th=[44827], 00:11:59.216 | 99.99th=[44827] 00:11:59.216 bw ( KiB/s): min=10080, max=10379, per=27.73%, avg=10229.50, stdev=211.42, samples=2 00:11:59.216 iops : min= 2520, max= 2594, avg=2557.00, stdev=52.33, samples=2 00:11:59.216 lat (msec) : 4=0.02%, 10=2.55%, 20=33.07%, 50=59.30%, 100=5.06% 00:11:59.216 cpu : usr=1.98%, sys=7.83%, ctx=268, majf=0, minf=19 00:11:59.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:11:59.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:59.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:59.216 issued rwts: total=2457,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:59.216 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:59.216 job1: (groupid=0, jobs=1): err= 0: pid=75931: Sun Dec 8 05:10:48 2024 00:11:59.216 read: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec) 00:11:59.216 slat (usec): min=5, max=26865, avg=218.97, stdev=1418.62 00:11:59.216 clat (usec): min=8644, max=66719, avg=28731.01, stdev=12651.97 00:11:59.216 lat (usec): min=8657, max=66733, avg=28949.98, stdev=12703.77 00:11:59.216 clat percentiles 
(usec): 00:11:59.216 | 1.00th=[ 8717], 5.00th=[11469], 10.00th=[12911], 20.00th=[20055], 00:11:59.216 | 30.00th=[21890], 40.00th=[23725], 50.00th=[24773], 60.00th=[27919], 00:11:59.216 | 70.00th=[33817], 80.00th=[38011], 90.00th=[47449], 95.00th=[55837], 00:11:59.216 | 99.00th=[63177], 99.50th=[66847], 99.90th=[66847], 99.95th=[66847], 00:11:59.216 | 99.99th=[66847] 00:11:59.216 write: IOPS=2243, BW=8973KiB/s (9188kB/s)(9000KiB/1003msec); 0 zone resets 00:11:59.216 slat (usec): min=11, max=29205, avg=235.47, stdev=1515.43 00:11:59.216 clat (usec): min=195, max=73713, avg=29912.73, stdev=15818.02 00:11:59.216 lat (usec): min=2633, max=73737, avg=30148.19, stdev=15875.73 00:11:59.216 clat percentiles (usec): 00:11:59.216 | 1.00th=[ 3195], 5.00th=[10552], 10.00th=[12518], 20.00th=[17957], 00:11:59.216 | 30.00th=[20055], 40.00th=[22676], 50.00th=[26084], 60.00th=[30016], 00:11:59.216 | 70.00th=[34341], 80.00th=[42206], 90.00th=[52691], 95.00th=[67634], 00:11:59.216 | 99.00th=[73925], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:11:59.216 | 99.99th=[73925] 00:11:59.216 bw ( KiB/s): min= 8143, max= 8143, per=22.07%, avg=8143.00, stdev= 0.00, samples=1 00:11:59.216 iops : min= 2035, max= 2035, avg=2035.00, stdev= 0.00, samples=1 00:11:59.216 lat (usec) : 250=0.02% 00:11:59.216 lat (msec) : 4=0.98%, 10=1.88%, 20=21.71%, 50=64.80%, 100=10.61% 00:11:59.216 cpu : usr=2.30%, sys=7.09%, ctx=236, majf=0, minf=7 00:11:59.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:11:59.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:59.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:59.216 issued rwts: total=2048,2250,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:59.216 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:59.216 job2: (groupid=0, jobs=1): err= 0: pid=75938: Sun Dec 8 05:10:48 2024 00:11:59.216 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:11:59.216 slat (usec): min=5, max=30200, avg=229.08, stdev=1503.38 00:11:59.216 clat (usec): min=8658, max=78207, avg=32132.63, stdev=15559.46 00:11:59.216 lat (usec): min=10490, max=79492, avg=32361.72, stdev=15594.20 00:11:59.216 clat percentiles (usec): 00:11:59.216 | 1.00th=[10552], 5.00th=[11863], 10.00th=[14091], 20.00th=[17171], 00:11:59.216 | 30.00th=[21627], 40.00th=[24511], 50.00th=[28967], 60.00th=[35914], 00:11:59.216 | 70.00th=[40633], 80.00th=[44827], 90.00th=[51643], 95.00th=[65274], 00:11:59.216 | 99.00th=[73925], 99.50th=[77071], 99.90th=[78119], 99.95th=[78119], 00:11:59.216 | 99.99th=[78119] 00:11:59.216 write: IOPS=2447, BW=9789KiB/s (10.0MB/s)(9828KiB/1004msec); 0 zone resets 00:11:59.216 slat (usec): min=12, max=26588, avg=209.21, stdev=1396.14 00:11:59.216 clat (usec): min=443, max=64420, avg=24763.11, stdev=10175.13 00:11:59.217 lat (usec): min=7624, max=65360, avg=24972.32, stdev=10224.36 00:11:59.217 clat percentiles (usec): 00:11:59.217 | 1.00th=[ 8225], 5.00th=[13173], 10.00th=[14091], 20.00th=[17171], 00:11:59.217 | 30.00th=[17957], 40.00th=[18744], 50.00th=[21103], 60.00th=[25560], 00:11:59.217 | 70.00th=[28967], 80.00th=[33424], 90.00th=[38011], 95.00th=[43779], 00:11:59.217 | 99.00th=[53216], 99.50th=[54264], 99.90th=[55837], 99.95th=[58983], 00:11:59.217 | 99.99th=[64226] 00:11:59.217 bw ( KiB/s): min= 9157, max= 9456, per=25.23%, avg=9306.50, stdev=211.42, samples=2 00:11:59.217 iops : min= 2289, max= 2364, avg=2326.50, stdev=53.03, samples=2 00:11:59.217 lat (usec) : 500=0.02% 00:11:59.217 lat 
(msec) : 10=0.98%, 20=36.05%, 50=55.89%, 100=7.06% 00:11:59.217 cpu : usr=1.89%, sys=7.08%, ctx=205, majf=0, minf=8 00:11:59.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:11:59.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:59.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:59.217 issued rwts: total=2048,2457,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:59.217 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:59.217 job3: (groupid=0, jobs=1): err= 0: pid=75939: Sun Dec 8 05:10:48 2024 00:11:59.217 read: IOPS=1983, BW=7933KiB/s (8123kB/s)(7996KiB/1008msec) 00:11:59.217 slat (usec): min=6, max=26576, avg=268.50, stdev=1614.77 00:11:59.217 clat (usec): min=1943, max=70210, avg=33390.18, stdev=11682.12 00:11:59.217 lat (usec): min=11639, max=70233, avg=33658.68, stdev=11671.83 00:11:59.217 clat percentiles (usec): 00:11:59.217 | 1.00th=[11731], 5.00th=[20317], 10.00th=[21627], 20.00th=[24511], 00:11:59.217 | 30.00th=[26608], 40.00th=[27395], 50.00th=[30802], 60.00th=[32637], 00:11:59.217 | 70.00th=[35914], 80.00th=[44827], 90.00th=[49546], 95.00th=[55313], 00:11:59.217 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[69731], 00:11:59.217 | 99.99th=[69731] 00:11:59.217 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:11:59.217 slat (usec): min=12, max=23099, avg=219.92, stdev=1250.69 00:11:59.217 clat (usec): min=12743, max=48800, avg=29215.88, stdev=8597.59 00:11:59.217 lat (usec): min=13736, max=51227, avg=29435.81, stdev=8581.36 00:11:59.217 clat percentiles (usec): 00:11:59.217 | 1.00th=[13698], 5.00th=[16909], 10.00th=[19006], 20.00th=[21627], 00:11:59.217 | 30.00th=[23987], 40.00th=[25822], 50.00th=[27657], 60.00th=[30016], 00:11:59.217 | 70.00th=[32900], 80.00th=[38011], 90.00th=[42206], 95.00th=[44303], 00:11:59.217 | 99.00th=[47973], 99.50th=[48497], 99.90th=[49021], 99.95th=[49021], 00:11:59.217 | 99.99th=[49021] 00:11:59.217 bw ( KiB/s): min= 8192, max= 8208, per=22.23%, avg=8200.00, stdev=11.31, samples=2 00:11:59.217 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:11:59.217 lat (msec) : 2=0.02%, 10=0.05%, 20=8.87%, 50=86.88%, 100=4.18% 00:11:59.217 cpu : usr=1.49%, sys=6.45%, ctx=251, majf=0, minf=17 00:11:59.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:11:59.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:59.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:59.217 issued rwts: total=1999,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:59.217 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:59.217 00:11:59.217 Run status group 0 (all jobs): 00:11:59.217 READ: bw=33.1MiB/s (34.7MB/s), 7933KiB/s-9731KiB/s (8123kB/s-9964kB/s), io=33.4MiB (35.0MB), run=1003-1010msec 00:11:59.217 WRITE: bw=36.0MiB/s (37.8MB/s), 8127KiB/s-9.90MiB/s (8322kB/s-10.4MB/s), io=36.4MiB (38.2MB), run=1003-1010msec 00:11:59.217 00:11:59.217 Disk stats (read/write): 00:11:59.217 nvme0n1: ios=2098/2116, merge=0/0, ticks=14544/10124, in_queue=24668, util=82.55% 00:11:59.217 nvme0n2: ios=1528/1536, merge=0/0, ticks=11225/12499, in_queue=23724, util=85.60% 00:11:59.217 nvme0n3: ios=1585/1938, merge=0/0, ticks=11559/12398, in_queue=23957, util=95.26% 00:11:59.217 nvme0n4: ios=1536/1750, merge=0/0, ticks=12429/12240, in_queue=24669, util=88.21% 00:11:59.217 05:10:48 -- target/fio.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:59.217 [global] 00:11:59.217 thread=1 00:11:59.217 invalidate=1 00:11:59.217 rw=randwrite 00:11:59.217 time_based=1 00:11:59.217 runtime=1 00:11:59.217 ioengine=libaio 00:11:59.217 direct=1 00:11:59.217 bs=4096 00:11:59.217 iodepth=128 00:11:59.217 norandommap=0 00:11:59.217 numjobs=1 00:11:59.217 00:11:59.217 verify_dump=1 00:11:59.217 verify_backlog=512 00:11:59.217 verify_state_save=0 00:11:59.217 do_verify=1 00:11:59.217 verify=crc32c-intel 00:11:59.217 [job0] 00:11:59.217 filename=/dev/nvme0n1 00:11:59.217 [job1] 00:11:59.217 filename=/dev/nvme0n2 00:11:59.217 [job2] 00:11:59.217 filename=/dev/nvme0n3 00:11:59.217 [job3] 00:11:59.217 filename=/dev/nvme0n4 00:11:59.217 Could not set queue depth (nvme0n1) 00:11:59.217 Could not set queue depth (nvme0n2) 00:11:59.217 Could not set queue depth (nvme0n3) 00:11:59.217 Could not set queue depth (nvme0n4) 00:11:59.474 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:59.474 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:59.474 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:59.474 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:59.475 fio-3.35 00:11:59.475 Starting 4 threads 00:12:00.863 00:12:00.863 job0: (groupid=0, jobs=1): err= 0: pid=75997: Sun Dec 8 05:10:50 2024 00:12:00.863 read: IOPS=2743, BW=10.7MiB/s (11.2MB/s)(10.8MiB/1009msec) 00:12:00.863 slat (usec): min=5, max=17212, avg=206.78, stdev=970.96 00:12:00.863 clat (usec): min=6192, max=82185, avg=25167.46, stdev=16975.52 00:12:00.863 lat (usec): min=9601, max=82487, avg=25374.24, stdev=17110.73 00:12:00.863 clat percentiles (usec): 00:12:00.863 | 1.00th=[ 9765], 5.00th=[10945], 10.00th=[11863], 20.00th=[12256], 00:12:00.863 | 30.00th=[12518], 40.00th=[12649], 50.00th=[13698], 60.00th=[25035], 00:12:00.863 | 70.00th=[36439], 80.00th=[39584], 90.00th=[47973], 95.00th=[60556], 00:12:00.863 | 99.00th=[73925], 99.50th=[73925], 99.90th=[77071], 99.95th=[80217], 00:12:00.863 | 99.99th=[82314] 00:12:00.863 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:12:00.863 slat (usec): min=9, max=25743, avg=132.04, stdev=704.60 00:12:00.863 clat (usec): min=9287, max=59603, avg=18795.44, stdev=8656.77 00:12:00.863 lat (usec): min=9319, max=59630, avg=18927.48, stdev=8683.00 00:12:00.863 clat percentiles (usec): 00:12:00.863 | 1.00th=[10421], 5.00th=[11731], 10.00th=[11994], 20.00th=[12518], 00:12:00.863 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13960], 60.00th=[17171], 00:12:00.863 | 70.00th=[21627], 80.00th=[23987], 90.00th=[32637], 95.00th=[34866], 00:12:00.863 | 99.00th=[57410], 99.50th=[57934], 99.90th=[59507], 99.95th=[59507], 00:12:00.863 | 99.99th=[59507] 00:12:00.863 bw ( KiB/s): min= 8192, max=16384, per=24.48%, avg=12288.00, stdev=5792.62, samples=2 00:12:00.863 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:12:00.863 lat (msec) : 10=0.92%, 20=61.56%, 50=32.57%, 100=4.95% 00:12:00.863 cpu : usr=2.98%, sys=8.33%, ctx=522, majf=0, minf=1 00:12:00.863 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:12:00.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:12:00.863 issued rwts: total=2768,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.863 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:00.863 job1: (groupid=0, jobs=1): err= 0: pid=75998: Sun Dec 8 05:10:50 2024 00:12:00.863 read: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec) 00:12:00.863 slat (usec): min=3, max=7647, avg=223.42, stdev=879.56 00:12:00.863 clat (usec): min=14865, max=54253, avg=27612.12, stdev=8566.12 00:12:00.863 lat (usec): min=14899, max=54264, avg=27835.54, stdev=8648.05 00:12:00.863 clat percentiles (usec): 00:12:00.863 | 1.00th=[16188], 5.00th=[17433], 10.00th=[17957], 20.00th=[18220], 00:12:00.863 | 30.00th=[22152], 40.00th=[23200], 50.00th=[23987], 60.00th=[29754], 00:12:00.863 | 70.00th=[32900], 80.00th=[36439], 90.00th=[39060], 95.00th=[40633], 00:12:00.863 | 99.00th=[50070], 99.50th=[50594], 99.90th=[54264], 99.95th=[54264], 00:12:00.863 | 99.99th=[54264] 00:12:00.863 write: IOPS=2495, BW=9980KiB/s (10.2MB/s)(9.82MiB/1008msec); 0 zone resets 00:12:00.864 slat (usec): min=9, max=12255, avg=208.56, stdev=878.36 00:12:00.864 clat (usec): min=5617, max=73440, avg=27731.23, stdev=12886.45 00:12:00.864 lat (usec): min=8533, max=73463, avg=27939.79, stdev=12970.23 00:12:00.864 clat percentiles (usec): 00:12:00.864 | 1.00th=[11076], 5.00th=[15533], 10.00th=[15664], 20.00th=[16057], 00:12:00.864 | 30.00th=[20579], 40.00th=[22414], 50.00th=[23987], 60.00th=[25822], 00:12:00.864 | 70.00th=[29754], 80.00th=[34341], 90.00th=[49546], 95.00th=[56361], 00:12:00.864 | 99.00th=[67634], 99.50th=[69731], 99.90th=[73925], 99.95th=[73925], 00:12:00.864 | 99.99th=[73925] 00:12:00.864 bw ( KiB/s): min= 7704, max=11414, per=19.04%, avg=9559.00, stdev=2623.37, samples=2 00:12:00.864 iops : min= 1926, max= 2853, avg=2389.50, stdev=655.49, samples=2 00:12:00.864 lat (msec) : 10=0.20%, 20=26.76%, 50=67.10%, 100=5.94% 00:12:00.864 cpu : usr=2.48%, sys=6.65%, ctx=439, majf=0, minf=8 00:12:00.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:12:00.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:00.864 issued rwts: total=2048,2515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:00.864 job2: (groupid=0, jobs=1): err= 0: pid=75999: Sun Dec 8 05:10:50 2024 00:12:00.864 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:12:00.864 slat (usec): min=5, max=7174, avg=169.83, stdev=745.65 00:12:00.864 clat (usec): min=13652, max=58597, avg=21766.04, stdev=5434.58 00:12:00.864 lat (usec): min=13676, max=58604, avg=21935.87, stdev=5481.95 00:12:00.864 clat percentiles (usec): 00:12:00.864 | 1.00th=[13960], 5.00th=[16581], 10.00th=[17433], 20.00th=[17957], 00:12:00.864 | 30.00th=[18744], 40.00th=[20317], 50.00th=[20841], 60.00th=[21890], 00:12:00.864 | 70.00th=[22938], 80.00th=[23462], 90.00th=[26084], 95.00th=[32113], 00:12:00.864 | 99.00th=[47973], 99.50th=[52691], 99.90th=[58459], 99.95th=[58459], 00:12:00.864 | 99.99th=[58459] 00:12:00.864 write: IOPS=2444, BW=9776KiB/s (10.0MB/s)(9864KiB/1009msec); 0 zone resets 00:12:00.864 slat (usec): min=11, max=8637, avg=257.23, stdev=944.25 00:12:00.864 clat (usec): min=6526, max=75098, avg=33494.14, stdev=16377.97 00:12:00.864 lat (usec): min=9305, max=75127, avg=33751.37, stdev=16481.17 00:12:00.864 clat percentiles (usec): 00:12:00.864 | 1.00th=[11076], 5.00th=[12256], 10.00th=[13829], 
20.00th=[18220], 00:12:00.864 | 30.00th=[22938], 40.00th=[25560], 50.00th=[28181], 60.00th=[36963], 00:12:00.864 | 70.00th=[42206], 80.00th=[47973], 90.00th=[56886], 95.00th=[65274], 00:12:00.864 | 99.00th=[71828], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:12:00.864 | 99.99th=[74974] 00:12:00.864 bw ( KiB/s): min= 7624, max=11102, per=18.65%, avg=9363.00, stdev=2459.32, samples=2 00:12:00.864 iops : min= 1906, max= 2775, avg=2340.50, stdev=614.48, samples=2 00:12:00.864 lat (msec) : 10=0.20%, 20=29.04%, 50=60.92%, 100=9.84% 00:12:00.864 cpu : usr=1.59%, sys=7.44%, ctx=290, majf=0, minf=7 00:12:00.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:12:00.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:00.864 issued rwts: total=2048,2466,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:00.864 job3: (groupid=0, jobs=1): err= 0: pid=76000: Sun Dec 8 05:10:50 2024 00:12:00.864 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:12:00.864 slat (usec): min=6, max=3901, avg=104.01, stdev=501.58 00:12:00.864 clat (usec): min=2932, max=17510, avg=13676.44, stdev=1604.37 00:12:00.864 lat (usec): min=2943, max=17526, avg=13780.45, stdev=1533.95 00:12:00.864 clat percentiles (usec): 00:12:00.864 | 1.00th=[ 6652], 5.00th=[11338], 10.00th=[12649], 20.00th=[12911], 00:12:00.864 | 30.00th=[13304], 40.00th=[13304], 50.00th=[13698], 60.00th=[13960], 00:12:00.864 | 70.00th=[14484], 80.00th=[14615], 90.00th=[15139], 95.00th=[15795], 00:12:00.864 | 99.00th=[17433], 99.50th=[17433], 99.90th=[17433], 99.95th=[17433], 00:12:00.864 | 99.99th=[17433] 00:12:00.864 write: IOPS=4599, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:12:00.864 slat (usec): min=9, max=5083, avg=105.20, stdev=457.66 00:12:00.864 clat (usec): min=277, max=17831, avg=13746.66, stdev=1195.77 00:12:00.864 lat (usec): min=2926, max=17887, avg=13851.86, stdev=1107.76 00:12:00.864 clat percentiles (usec): 00:12:00.864 | 1.00th=[10683], 5.00th=[12518], 10.00th=[12780], 20.00th=[12911], 00:12:00.864 | 30.00th=[13173], 40.00th=[13173], 50.00th=[13304], 60.00th=[13829], 00:12:00.864 | 70.00th=[14222], 80.00th=[14615], 90.00th=[15533], 95.00th=[15664], 00:12:00.864 | 99.00th=[17695], 99.50th=[17695], 99.90th=[17695], 99.95th=[17695], 00:12:00.864 | 99.99th=[17957] 00:12:00.864 bw ( KiB/s): min=17531, max=19368, per=36.75%, avg=18449.50, stdev=1298.96, samples=2 00:12:00.864 iops : min= 4382, max= 4842, avg=4612.00, stdev=325.27, samples=2 00:12:00.864 lat (usec) : 500=0.01% 00:12:00.864 lat (msec) : 4=0.35%, 10=0.75%, 20=98.89% 00:12:00.864 cpu : usr=4.00%, sys=12.89%, ctx=291, majf=0, minf=3 00:12:00.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:00.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:00.864 issued rwts: total=4608,4609,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:00.864 00:12:00.864 Run status group 0 (all jobs): 00:12:00.864 READ: bw=44.4MiB/s (46.6MB/s), 8119KiB/s-18.0MiB/s (8314kB/s-18.8MB/s), io=44.8MiB (47.0MB), run=1002-1009msec 00:12:00.864 WRITE: bw=49.0MiB/s (51.4MB/s), 9776KiB/s-18.0MiB/s (10.0MB/s-18.8MB/s), io=49.5MiB (51.9MB), run=1002-1009msec 00:12:00.864 
00:12:00.864 Disk stats (read/write): 00:12:00.864 nvme0n1: ios=2580/2560, merge=0/0, ticks=21095/12175, in_queue=33270, util=86.46% 00:12:00.864 nvme0n2: ios=1834/2048, merge=0/0, ticks=16349/17418, in_queue=33767, util=88.00% 00:12:00.864 nvme0n3: ios=2040/2055, merge=0/0, ticks=14069/19958, in_queue=34027, util=88.47% 00:12:00.864 nvme0n4: ios=3616/4096, merge=0/0, ticks=11462/12350, in_queue=23812, util=89.53% 00:12:00.864 05:10:50 -- target/fio.sh@55 -- # sync 00:12:00.864 05:10:50 -- target/fio.sh@59 -- # fio_pid=76013 00:12:00.864 05:10:50 -- target/fio.sh@61 -- # sleep 3 00:12:00.864 05:10:50 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:00.864 [global] 00:12:00.864 thread=1 00:12:00.864 invalidate=1 00:12:00.864 rw=read 00:12:00.864 time_based=1 00:12:00.864 runtime=10 00:12:00.864 ioengine=libaio 00:12:00.864 direct=1 00:12:00.864 bs=4096 00:12:00.864 iodepth=1 00:12:00.864 norandommap=1 00:12:00.864 numjobs=1 00:12:00.864 00:12:00.864 [job0] 00:12:00.864 filename=/dev/nvme0n1 00:12:00.864 [job1] 00:12:00.864 filename=/dev/nvme0n2 00:12:00.864 [job2] 00:12:00.864 filename=/dev/nvme0n3 00:12:00.864 [job3] 00:12:00.864 filename=/dev/nvme0n4 00:12:00.864 Could not set queue depth (nvme0n1) 00:12:00.864 Could not set queue depth (nvme0n2) 00:12:00.864 Could not set queue depth (nvme0n3) 00:12:00.864 Could not set queue depth (nvme0n4) 00:12:00.864 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:00.864 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:00.864 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:00.864 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:00.864 fio-3.35 00:12:00.864 Starting 4 threads 00:12:04.146 05:10:53 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:04.146 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=27492352, buflen=4096 00:12:04.146 fio: pid=76056, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:04.146 05:10:53 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:04.711 fio: pid=76055, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:04.711 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=34332672, buflen=4096 00:12:04.711 05:10:54 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:04.712 05:10:54 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:04.969 fio: pid=76053, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:04.969 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=63512576, buflen=4096 00:12:04.969 05:10:54 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:04.969 05:10:54 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:05.227 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=46854144, buflen=4096 00:12:05.227 fio: pid=76054, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:05.484 05:10:55 -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:05.484 05:10:55 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:05.484 00:12:05.484 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76053: Sun Dec 8 05:10:55 2024 00:12:05.484 read: IOPS=3917, BW=15.3MiB/s (16.0MB/s)(60.6MiB/3958msec) 00:12:05.484 slat (usec): min=8, max=18115, avg=27.76, stdev=247.01 00:12:05.484 clat (usec): min=129, max=27999, avg=224.92, stdev=385.78 00:12:05.484 lat (usec): min=143, max=28018, avg=252.68, stdev=461.32 00:12:05.484 clat percentiles (usec): 00:12:05.484 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 167], 00:12:05.484 | 30.00th=[ 178], 40.00th=[ 194], 50.00th=[ 208], 60.00th=[ 217], 00:12:05.484 | 70.00th=[ 227], 80.00th=[ 237], 90.00th=[ 262], 95.00th=[ 343], 00:12:05.484 | 99.00th=[ 465], 99.50th=[ 529], 99.90th=[ 2835], 99.95th=[ 6980], 00:12:05.484 | 99.99th=[20317] 00:12:05.484 bw ( KiB/s): min= 7289, max=19688, per=40.34%, avg=15788.71, stdev=4306.12, samples=7 00:12:05.484 iops : min= 1822, max= 4922, avg=3947.14, stdev=1076.61, samples=7 00:12:05.484 lat (usec) : 250=87.03%, 500=12.34%, 750=0.33%, 1000=0.05% 00:12:05.484 lat (msec) : 2=0.10%, 4=0.08%, 10=0.03%, 20=0.03%, 50=0.01% 00:12:05.484 cpu : usr=1.95%, sys=8.52%, ctx=15515, majf=0, minf=1 00:12:05.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:05.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.484 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.484 issued rwts: total=15507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:05.484 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76054: Sun Dec 8 05:10:55 2024 00:12:05.484 read: IOPS=2662, BW=10.4MiB/s (10.9MB/s)(44.7MiB/4297msec) 00:12:05.484 slat (usec): min=7, max=15678, avg=26.84, stdev=234.64 00:12:05.484 clat (usec): min=127, max=17423, avg=346.42, stdev=403.79 00:12:05.484 lat (usec): min=139, max=17453, avg=373.26, stdev=467.38 00:12:05.484 clat percentiles (usec): 00:12:05.484 | 1.00th=[ 143], 5.00th=[ 157], 10.00th=[ 176], 20.00th=[ 206], 00:12:05.484 | 30.00th=[ 225], 40.00th=[ 265], 50.00th=[ 318], 60.00th=[ 359], 00:12:05.484 | 70.00th=[ 412], 80.00th=[ 449], 90.00th=[ 490], 95.00th=[ 515], 00:12:05.484 | 99.00th=[ 660], 99.50th=[ 1483], 99.90th=[ 6915], 99.95th=[ 9765], 00:12:05.484 | 99.99th=[15008] 00:12:05.484 bw ( KiB/s): min= 8056, max=18304, per=26.53%, avg=10382.50, stdev=3381.45, samples=8 00:12:05.484 iops : min= 2014, max= 4576, avg=2595.50, stdev=845.38, samples=8 00:12:05.484 lat (usec) : 250=37.93%, 500=54.74%, 750=6.52%, 1000=0.12% 00:12:05.484 lat (msec) : 2=0.29%, 4=0.21%, 10=0.15%, 20=0.03% 00:12:05.484 cpu : usr=1.14%, sys=5.73%, ctx=11475, majf=0, minf=2 00:12:05.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:05.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.484 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.484 issued rwts: total=11440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:05.484 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76055: Sun Dec 8 
05:10:55 2024 00:12:05.484 read: IOPS=2388, BW=9555KiB/s (9784kB/s)(32.7MiB/3509msec) 00:12:05.484 slat (usec): min=7, max=14607, avg=25.94, stdev=204.26 00:12:05.484 clat (usec): min=157, max=15823, avg=390.39, stdev=455.73 00:12:05.484 lat (usec): min=171, max=15837, avg=416.33, stdev=499.21 00:12:05.484 clat percentiles (usec): 00:12:05.484 | 1.00th=[ 198], 5.00th=[ 217], 10.00th=[ 229], 20.00th=[ 251], 00:12:05.484 | 30.00th=[ 297], 40.00th=[ 326], 50.00th=[ 367], 60.00th=[ 412], 00:12:05.484 | 70.00th=[ 441], 80.00th=[ 465], 90.00th=[ 498], 95.00th=[ 529], 00:12:05.484 | 99.00th=[ 627], 99.50th=[ 1237], 99.90th=[ 8586], 99.95th=[11731], 00:12:05.484 | 99.99th=[15795] 00:12:05.484 bw ( KiB/s): min= 6800, max=13375, per=23.84%, avg=9331.83, stdev=2255.83, samples=6 00:12:05.484 iops : min= 1700, max= 3343, avg=2332.83, stdev=563.69, samples=6 00:12:05.484 lat (usec) : 250=19.87%, 500=70.74%, 750=8.62%, 1000=0.17% 00:12:05.484 lat (msec) : 2=0.24%, 4=0.12%, 10=0.17%, 20=0.06% 00:12:05.484 cpu : usr=1.31%, sys=5.05%, ctx=8414, majf=0, minf=2 00:12:05.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:05.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.484 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.484 issued rwts: total=8383,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:05.484 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76056: Sun Dec 8 05:10:55 2024 00:12:05.484 read: IOPS=2155, BW=8622KiB/s (8829kB/s)(26.2MiB/3114msec) 00:12:05.484 slat (usec): min=7, max=315, avg=23.20, stdev=10.28 00:12:05.484 clat (usec): min=83, max=27961, avg=437.92, stdev=663.52 00:12:05.484 lat (usec): min=185, max=27994, avg=461.12, stdev=663.78 00:12:05.484 clat percentiles (usec): 00:12:05.484 | 1.00th=[ 229], 5.00th=[ 255], 10.00th=[ 269], 20.00th=[ 297], 00:12:05.484 | 30.00th=[ 334], 40.00th=[ 383], 50.00th=[ 416], 60.00th=[ 437], 00:12:05.484 | 70.00th=[ 453], 80.00th=[ 478], 90.00th=[ 510], 95.00th=[ 545], 00:12:05.484 | 99.00th=[ 725], 99.50th=[ 2114], 99.90th=[11600], 99.95th=[14877], 00:12:05.484 | 99.99th=[27919] 00:12:05.484 bw ( KiB/s): min= 6688, max=10144, per=22.11%, avg=8652.50, stdev=1191.69, samples=6 00:12:05.484 iops : min= 1672, max= 2536, avg=2163.00, stdev=297.81, samples=6 00:12:05.484 lat (usec) : 100=0.01%, 250=3.37%, 500=83.91%, 750=11.75%, 1000=0.12% 00:12:05.484 lat (msec) : 2=0.30%, 4=0.24%, 10=0.16%, 20=0.09%, 50=0.03% 00:12:05.484 cpu : usr=1.16%, sys=4.72%, ctx=6737, majf=0, minf=1 00:12:05.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:05.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.484 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.484 issued rwts: total=6713,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:05.484 00:12:05.484 Run status group 0 (all jobs): 00:12:05.484 READ: bw=38.2MiB/s (40.1MB/s), 8622KiB/s-15.3MiB/s (8829kB/s-16.0MB/s), io=164MiB (172MB), run=3114-4297msec 00:12:05.484 00:12:05.484 Disk stats (read/write): 00:12:05.484 nvme0n1: ios=15127/0, merge=0/0, ticks=3452/0, in_queue=3452, util=95.04% 00:12:05.484 nvme0n2: ios=10657/0, merge=0/0, ticks=3502/0, in_queue=3502, util=95.49% 00:12:05.484 nvme0n3: ios=7903/0, merge=0/0, ticks=2865/0, in_queue=2865, 
util=95.98% 00:12:05.484 nvme0n4: ios=6704/0, merge=0/0, ticks=2655/0, in_queue=2655, util=96.78% 00:12:05.742 05:10:55 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:05.742 05:10:55 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:05.999 05:10:55 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:05.999 05:10:55 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:06.256 05:10:56 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:06.256 05:10:56 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:06.822 05:10:56 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:06.822 05:10:56 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:07.080 05:10:56 -- target/fio.sh@69 -- # fio_status=0 00:12:07.080 05:10:56 -- target/fio.sh@70 -- # wait 76013 00:12:07.080 05:10:56 -- target/fio.sh@70 -- # fio_status=4 00:12:07.080 05:10:56 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.080 05:10:56 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:07.080 05:10:56 -- common/autotest_common.sh@1208 -- # local i=0 00:12:07.080 05:10:56 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:07.080 05:10:56 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.080 05:10:56 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.080 05:10:56 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:07.080 nvmf hotplug test: fio failed as expected 00:12:07.080 05:10:56 -- common/autotest_common.sh@1220 -- # return 0 00:12:07.080 05:10:56 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:07.080 05:10:56 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:07.080 05:10:56 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.339 05:10:56 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:07.339 05:10:56 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:07.339 05:10:56 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:07.339 05:10:56 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:07.339 05:10:56 -- target/fio.sh@91 -- # nvmftestfini 00:12:07.339 05:10:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:07.339 05:10:56 -- nvmf/common.sh@116 -- # sync 00:12:07.339 05:10:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:07.339 05:10:56 -- nvmf/common.sh@119 -- # set +e 00:12:07.339 05:10:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:07.339 05:10:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:07.339 rmmod nvme_tcp 00:12:07.339 rmmod nvme_fabrics 00:12:07.339 rmmod nvme_keyring 00:12:07.339 05:10:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:07.339 05:10:57 -- nvmf/common.sh@123 -- # set -e 00:12:07.339 05:10:57 -- nvmf/common.sh@124 -- # return 0 00:12:07.339 05:10:57 -- nvmf/common.sh@477 -- # '[' -n 75619 ']' 00:12:07.339 05:10:57 -- nvmf/common.sh@478 -- # killprocess 75619 00:12:07.339 05:10:57 -- 
common/autotest_common.sh@936 -- # '[' -z 75619 ']' 00:12:07.339 05:10:57 -- common/autotest_common.sh@940 -- # kill -0 75619 00:12:07.339 05:10:57 -- common/autotest_common.sh@941 -- # uname 00:12:07.339 05:10:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:07.339 05:10:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75619 00:12:07.339 killing process with pid 75619 00:12:07.339 05:10:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:07.339 05:10:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:07.340 05:10:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75619' 00:12:07.340 05:10:57 -- common/autotest_common.sh@955 -- # kill 75619 00:12:07.340 05:10:57 -- common/autotest_common.sh@960 -- # wait 75619 00:12:07.598 05:10:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:07.598 05:10:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:07.598 05:10:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:07.598 05:10:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:07.598 05:10:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:07.598 05:10:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.598 05:10:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.598 05:10:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.598 05:10:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:07.598 00:12:07.598 real 0m22.204s 00:12:07.598 user 1m22.000s 00:12:07.598 sys 0m11.791s 00:12:07.598 05:10:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:07.598 05:10:57 -- common/autotest_common.sh@10 -- # set +x 00:12:07.598 ************************************ 00:12:07.598 END TEST nvmf_fio_target 00:12:07.598 ************************************ 00:12:07.598 05:10:57 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:07.598 05:10:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:07.598 05:10:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:07.598 05:10:57 -- common/autotest_common.sh@10 -- # set +x 00:12:07.598 ************************************ 00:12:07.598 START TEST nvmf_bdevio 00:12:07.598 ************************************ 00:12:07.598 05:10:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:07.598 * Looking for test storage... 
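The fio-target teardown traced above reduces to a short RPC/initiator sequence. As a minimal sketch assembled from the commands visible in the trace (not captured output from this run; the loop variable is illustrative only), it amounts to:

    # Sketch of the nvmf_fio_target teardown seen above; deleting the malloc bdevs while
    # fio is still running is the hotplug part of the test, hence "fio failed as expected".
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for bdev in Malloc3 Malloc4 Malloc5 Malloc6; do      # the bdevs backing this run's namespaces
        "$RPC" bdev_malloc_delete "$bdev"                # hot-remove the namespace backing store
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # drop the initiator connection
    "$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the subsystem itself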
00:12:07.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:07.598 05:10:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:07.599 05:10:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:07.599 05:10:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:07.857 05:10:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:07.857 05:10:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:07.857 05:10:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:07.857 05:10:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:07.857 05:10:57 -- scripts/common.sh@335 -- # IFS=.-: 00:12:07.857 05:10:57 -- scripts/common.sh@335 -- # read -ra ver1 00:12:07.857 05:10:57 -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.857 05:10:57 -- scripts/common.sh@336 -- # read -ra ver2 00:12:07.857 05:10:57 -- scripts/common.sh@337 -- # local 'op=<' 00:12:07.857 05:10:57 -- scripts/common.sh@339 -- # ver1_l=2 00:12:07.857 05:10:57 -- scripts/common.sh@340 -- # ver2_l=1 00:12:07.857 05:10:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:07.857 05:10:57 -- scripts/common.sh@343 -- # case "$op" in 00:12:07.857 05:10:57 -- scripts/common.sh@344 -- # : 1 00:12:07.857 05:10:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:07.857 05:10:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:07.857 05:10:57 -- scripts/common.sh@364 -- # decimal 1 00:12:07.858 05:10:57 -- scripts/common.sh@352 -- # local d=1 00:12:07.858 05:10:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.858 05:10:57 -- scripts/common.sh@354 -- # echo 1 00:12:07.858 05:10:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:07.858 05:10:57 -- scripts/common.sh@365 -- # decimal 2 00:12:07.858 05:10:57 -- scripts/common.sh@352 -- # local d=2 00:12:07.858 05:10:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.858 05:10:57 -- scripts/common.sh@354 -- # echo 2 00:12:07.858 05:10:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:07.858 05:10:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:07.858 05:10:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:07.858 05:10:57 -- scripts/common.sh@367 -- # return 0 00:12:07.858 05:10:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.858 05:10:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:07.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.858 --rc genhtml_branch_coverage=1 00:12:07.858 --rc genhtml_function_coverage=1 00:12:07.858 --rc genhtml_legend=1 00:12:07.858 --rc geninfo_all_blocks=1 00:12:07.858 --rc geninfo_unexecuted_blocks=1 00:12:07.858 00:12:07.858 ' 00:12:07.858 05:10:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:07.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.858 --rc genhtml_branch_coverage=1 00:12:07.858 --rc genhtml_function_coverage=1 00:12:07.858 --rc genhtml_legend=1 00:12:07.858 --rc geninfo_all_blocks=1 00:12:07.858 --rc geninfo_unexecuted_blocks=1 00:12:07.858 00:12:07.858 ' 00:12:07.858 05:10:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:07.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.858 --rc genhtml_branch_coverage=1 00:12:07.858 --rc genhtml_function_coverage=1 00:12:07.858 --rc genhtml_legend=1 00:12:07.858 --rc geninfo_all_blocks=1 00:12:07.858 --rc geninfo_unexecuted_blocks=1 00:12:07.858 00:12:07.858 ' 00:12:07.858 
05:10:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:07.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.858 --rc genhtml_branch_coverage=1 00:12:07.858 --rc genhtml_function_coverage=1 00:12:07.858 --rc genhtml_legend=1 00:12:07.858 --rc geninfo_all_blocks=1 00:12:07.858 --rc geninfo_unexecuted_blocks=1 00:12:07.858 00:12:07.858 ' 00:12:07.858 05:10:57 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:07.858 05:10:57 -- nvmf/common.sh@7 -- # uname -s 00:12:07.858 05:10:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.858 05:10:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.858 05:10:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.858 05:10:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.858 05:10:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.858 05:10:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.858 05:10:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.858 05:10:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.858 05:10:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.858 05:10:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.858 05:10:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:12:07.858 05:10:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:12:07.858 05:10:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.858 05:10:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.858 05:10:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:07.858 05:10:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:07.858 05:10:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.858 05:10:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.858 05:10:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.858 05:10:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.858 05:10:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.858 05:10:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.858 05:10:57 -- paths/export.sh@5 -- # export PATH 00:12:07.858 05:10:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.858 05:10:57 -- nvmf/common.sh@46 -- # : 0 00:12:07.858 05:10:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:07.858 05:10:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:07.858 05:10:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:07.858 05:10:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.858 05:10:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.858 05:10:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:07.858 05:10:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:07.858 05:10:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:07.858 05:10:57 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:07.858 05:10:57 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:07.858 05:10:57 -- target/bdevio.sh@14 -- # nvmftestinit 00:12:07.858 05:10:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:07.858 05:10:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.858 05:10:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:07.858 05:10:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:07.858 05:10:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:07.858 05:10:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.858 05:10:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.858 05:10:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.858 05:10:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:07.858 05:10:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:07.858 05:10:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:07.858 05:10:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:07.858 05:10:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:07.858 05:10:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:07.858 05:10:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.858 05:10:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.858 05:10:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:07.858 05:10:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:07.858 05:10:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:07.858 05:10:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:07.858 05:10:57 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:07.858 05:10:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.858 05:10:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:07.858 05:10:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:07.858 05:10:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:07.858 05:10:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:07.858 05:10:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:07.858 05:10:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:07.858 Cannot find device "nvmf_tgt_br" 00:12:07.858 05:10:57 -- nvmf/common.sh@154 -- # true 00:12:07.858 05:10:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:07.858 Cannot find device "nvmf_tgt_br2" 00:12:07.858 05:10:57 -- nvmf/common.sh@155 -- # true 00:12:07.858 05:10:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:07.858 05:10:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:07.858 Cannot find device "nvmf_tgt_br" 00:12:07.858 05:10:57 -- nvmf/common.sh@157 -- # true 00:12:07.858 05:10:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:07.858 Cannot find device "nvmf_tgt_br2" 00:12:07.858 05:10:57 -- nvmf/common.sh@158 -- # true 00:12:07.858 05:10:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:07.858 05:10:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:07.858 05:10:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:07.858 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:07.858 05:10:57 -- nvmf/common.sh@161 -- # true 00:12:07.858 05:10:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:07.858 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:07.858 05:10:57 -- nvmf/common.sh@162 -- # true 00:12:07.858 05:10:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:07.858 05:10:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:08.117 05:10:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:08.117 05:10:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:08.117 05:10:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:08.117 05:10:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:08.117 05:10:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:08.117 05:10:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:08.117 05:10:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:08.117 05:10:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:08.117 05:10:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:08.117 05:10:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:08.117 05:10:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:08.117 05:10:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:08.117 05:10:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:08.117 05:10:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:08.117 05:10:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:08.117 05:10:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:08.117 05:10:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:08.117 05:10:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:08.117 05:10:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:08.117 05:10:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:08.117 05:10:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:08.117 05:10:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:08.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:12:08.117 00:12:08.117 --- 10.0.0.2 ping statistics --- 00:12:08.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.117 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:12:08.117 05:10:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:08.117 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:08.117 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:12:08.117 00:12:08.117 --- 10.0.0.3 ping statistics --- 00:12:08.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.117 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:12:08.117 05:10:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:08.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:08.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:12:08.117 00:12:08.117 --- 10.0.0.1 ping statistics --- 00:12:08.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.117 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:08.117 05:10:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.117 05:10:57 -- nvmf/common.sh@421 -- # return 0 00:12:08.117 05:10:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:08.117 05:10:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.117 05:10:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:08.117 05:10:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:08.117 05:10:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.117 05:10:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:08.117 05:10:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:08.117 05:10:57 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:08.117 05:10:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:08.117 05:10:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:08.117 05:10:57 -- common/autotest_common.sh@10 -- # set +x 00:12:08.117 05:10:57 -- nvmf/common.sh@469 -- # nvmfpid=76343 00:12:08.117 05:10:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:08.117 05:10:57 -- nvmf/common.sh@470 -- # waitforlisten 76343 00:12:08.117 05:10:57 -- common/autotest_common.sh@829 -- # '[' -z 76343 ']' 00:12:08.117 05:10:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.117 05:10:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:08.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
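Before the target is started, nvmf_veth_init assembles the small virtual topology that the pings above verify. A condensed sketch of that setup, using the interface names and 10.0.0.0/24 addressing from the trace (the second target interface, nvmf_tgt_if2/10.0.0.3, is built the same way and omitted here for brevity):

    # Sketch of the veth/bridge/namespace layout nvmf_veth_init creates.
    ip netns add nvmf_tgt_ns_spdk                                # target runs in its own netns
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge joins the two pairs
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                           # initiator -> target sanity check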
00:12:08.117 05:10:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.117 05:10:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:08.117 05:10:57 -- common/autotest_common.sh@10 -- # set +x 00:12:08.375 [2024-12-08 05:10:57.959489] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:08.376 [2024-12-08 05:10:57.959600] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.376 [2024-12-08 05:10:58.102353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.376 [2024-12-08 05:10:58.147190] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:08.376 [2024-12-08 05:10:58.147404] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.376 [2024-12-08 05:10:58.147432] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.376 [2024-12-08 05:10:58.147448] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:08.376 [2024-12-08 05:10:58.147562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:08.376 [2024-12-08 05:10:58.148106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:12:08.376 [2024-12-08 05:10:58.148208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:12:08.376 [2024-12-08 05:10:58.148225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.634 05:10:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:08.634 05:10:58 -- common/autotest_common.sh@862 -- # return 0 00:12:08.634 05:10:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:08.634 05:10:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:08.634 05:10:58 -- common/autotest_common.sh@10 -- # set +x 00:12:08.634 05:10:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.634 05:10:58 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:08.634 05:10:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.634 05:10:58 -- common/autotest_common.sh@10 -- # set +x 00:12:08.634 [2024-12-08 05:10:58.404705] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.893 05:10:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.893 05:10:58 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:08.893 05:10:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.893 05:10:58 -- common/autotest_common.sh@10 -- # set +x 00:12:08.893 Malloc0 00:12:08.893 05:10:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.893 05:10:58 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:08.893 05:10:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.893 05:10:58 -- common/autotest_common.sh@10 -- # set +x 00:12:08.893 05:10:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.893 05:10:58 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:08.893 05:10:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.893 
05:10:58 -- common/autotest_common.sh@10 -- # set +x 00:12:08.893 05:10:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.893 05:10:58 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.893 05:10:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.893 05:10:58 -- common/autotest_common.sh@10 -- # set +x 00:12:08.893 [2024-12-08 05:10:58.466293] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.893 05:10:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.893 05:10:58 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:08.893 05:10:58 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:08.893 05:10:58 -- nvmf/common.sh@520 -- # config=() 00:12:08.893 05:10:58 -- nvmf/common.sh@520 -- # local subsystem config 00:12:08.893 05:10:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:12:08.893 05:10:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:12:08.893 { 00:12:08.893 "params": { 00:12:08.893 "name": "Nvme$subsystem", 00:12:08.893 "trtype": "$TEST_TRANSPORT", 00:12:08.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:08.893 "adrfam": "ipv4", 00:12:08.893 "trsvcid": "$NVMF_PORT", 00:12:08.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:08.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:08.893 "hdgst": ${hdgst:-false}, 00:12:08.893 "ddgst": ${ddgst:-false} 00:12:08.893 }, 00:12:08.893 "method": "bdev_nvme_attach_controller" 00:12:08.893 } 00:12:08.893 EOF 00:12:08.893 )") 00:12:08.893 05:10:58 -- nvmf/common.sh@542 -- # cat 00:12:08.893 05:10:58 -- nvmf/common.sh@544 -- # jq . 00:12:08.893 05:10:58 -- nvmf/common.sh@545 -- # IFS=, 00:12:08.893 05:10:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:12:08.893 "params": { 00:12:08.893 "name": "Nvme1", 00:12:08.893 "trtype": "tcp", 00:12:08.893 "traddr": "10.0.0.2", 00:12:08.893 "adrfam": "ipv4", 00:12:08.893 "trsvcid": "4420", 00:12:08.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:08.893 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:08.893 "hdgst": false, 00:12:08.893 "ddgst": false 00:12:08.893 }, 00:12:08.893 "method": "bdev_nvme_attach_controller" 00:12:08.893 }' 00:12:08.893 [2024-12-08 05:10:58.519852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:08.893 [2024-12-08 05:10:58.520428] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76367 ] 00:12:08.893 [2024-12-08 05:10:58.661197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:09.152 [2024-12-08 05:10:58.704462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.152 [2024-12-08 05:10:58.704573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.152 [2024-12-08 05:10:58.704562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.152 [2024-12-08 05:10:58.848043] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
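The JSON fragment printed by gen_nvmf_target_json above is the per-controller entry handed to bdevio on /dev/fd/62. For reference, the same controller parameters captured as a standalone snippet (the output path below is illustrative only; the harness's own plumbing, not this file, is what bdevio actually reads in the run above):

    # Sketch: the bdev_nvme_attach_controller parameters bdevio is driven with in this run.
    cat > /tmp/nvme1_attach.json <<'EOF'
    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF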
00:12:09.152 [2024-12-08 05:10:58.848117] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:12:09.152 I/O targets: 00:12:09.152 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:09.152 00:12:09.152 00:12:09.152 CUnit - A unit testing framework for C - Version 2.1-3 00:12:09.152 http://cunit.sourceforge.net/ 00:12:09.152 00:12:09.152 00:12:09.152 Suite: bdevio tests on: Nvme1n1 00:12:09.152 Test: blockdev write read block ...passed 00:12:09.152 Test: blockdev write zeroes read block ...passed 00:12:09.152 Test: blockdev write zeroes read no split ...passed 00:12:09.152 Test: blockdev write zeroes read split ...passed 00:12:09.152 Test: blockdev write zeroes read split partial ...passed 00:12:09.152 Test: blockdev reset ...[2024-12-08 05:10:58.885539] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:09.152 [2024-12-08 05:10:58.886992] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1641ea0 (9): Bad file descriptor 00:12:09.152 [2024-12-08 05:10:58.906125] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:09.152 passed 00:12:09.152 Test: blockdev write read 8 blocks ...passed 00:12:09.152 Test: blockdev write read size > 128k ...passed 00:12:09.152 Test: blockdev write read invalid size ...passed 00:12:09.152 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:09.152 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:09.152 Test: blockdev write read max offset ...passed 00:12:09.152 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:09.152 Test: blockdev writev readv 8 blocks ...passed 00:12:09.152 Test: blockdev writev readv 30 x 1block ...passed 00:12:09.152 Test: blockdev writev readv block ...passed 00:12:09.152 Test: blockdev writev readv size > 128k ...passed 00:12:09.152 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:09.152 Test: blockdev comparev and writev ...passed 00:12:09.152 Test: blockdev nvme passthru rw ...passed 00:12:09.152 Test: blockdev nvme passthru vendor specific ...passed 00:12:09.152 Test: blockdev nvme admin passthru ...[2024-12-08 05:10:58.922829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.152 [2024-12-08 05:10:58.922909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:09.152 [2024-12-08 05:10:58.922943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.152 [2024-12-08 05:10:58.922963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:09.152 [2024-12-08 05:10:58.923362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.152 [2024-12-08 05:10:58.923391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:09.152 [2024-12-08 05:10:58.923418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.153 [2024-12-08 05:10:58.923438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:09.153 [2024-12-08 05:10:58.923841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.153 [2024-12-08 05:10:58.923869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:09.153 [2024-12-08 05:10:58.923895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.153 [2024-12-08 05:10:58.923913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:09.153 [2024-12-08 05:10:58.924326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.153 [2024-12-08 05:10:58.924356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:09.153 [2024-12-08 05:10:58.924383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.153 [2024-12-08 05:10:58.924401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:09.153 [2024-12-08 05:10:58.925457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:09.153 [2024-12-08 05:10:58.925490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:09.153 [2024-12-08 05:10:58.925653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:09.153 [2024-12-08 05:10:58.925720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:09.153 [2024-12-08 05:10:58.925899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:09.153 [2024-12-08 05:10:58.925926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:09.153 [2024-12-08 05:10:58.926086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:09.153 [2024-12-08 05:10:58.926113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:09.411 passed 00:12:09.411 Test: blockdev copy ...passed 00:12:09.411 00:12:09.411 Run Summary: Type Total Ran Passed Failed Inactive 00:12:09.411 suites 1 1 n/a 0 0 00:12:09.411 tests 23 23 23 0 0 00:12:09.411 asserts 152 152 152 0 n/a 00:12:09.411 00:12:09.411 Elapsed time = 0.176 seconds 00:12:09.411 05:10:59 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.411 05:10:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.411 05:10:59 -- common/autotest_common.sh@10 -- # set +x 00:12:09.411 05:10:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.411 05:10:59 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:09.411 05:10:59 -- target/bdevio.sh@30 -- # nvmftestfini 00:12:09.411 05:10:59 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:12:09.411 05:10:59 -- nvmf/common.sh@116 -- # sync 00:12:09.670 05:10:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:09.670 05:10:59 -- nvmf/common.sh@119 -- # set +e 00:12:09.670 05:10:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:09.670 05:10:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:09.670 rmmod nvme_tcp 00:12:09.670 rmmod nvme_fabrics 00:12:09.670 rmmod nvme_keyring 00:12:09.670 05:10:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:09.670 05:10:59 -- nvmf/common.sh@123 -- # set -e 00:12:09.670 05:10:59 -- nvmf/common.sh@124 -- # return 0 00:12:09.670 05:10:59 -- nvmf/common.sh@477 -- # '[' -n 76343 ']' 00:12:09.670 05:10:59 -- nvmf/common.sh@478 -- # killprocess 76343 00:12:09.670 05:10:59 -- common/autotest_common.sh@936 -- # '[' -z 76343 ']' 00:12:09.670 05:10:59 -- common/autotest_common.sh@940 -- # kill -0 76343 00:12:09.670 05:10:59 -- common/autotest_common.sh@941 -- # uname 00:12:09.670 05:10:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:09.670 05:10:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76343 00:12:09.670 05:10:59 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:12:09.670 05:10:59 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:12:09.670 05:10:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76343' 00:12:09.670 killing process with pid 76343 00:12:09.670 05:10:59 -- common/autotest_common.sh@955 -- # kill 76343 00:12:09.670 05:10:59 -- common/autotest_common.sh@960 -- # wait 76343 00:12:09.670 05:10:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:09.670 05:10:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:09.670 05:10:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:09.670 05:10:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:09.670 05:10:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:09.670 05:10:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.670 05:10:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.670 05:10:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.928 05:10:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:09.928 00:12:09.928 real 0m2.171s 00:12:09.928 user 0m6.318s 00:12:09.928 sys 0m0.741s 00:12:09.928 05:10:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:09.928 05:10:59 -- common/autotest_common.sh@10 -- # set +x 00:12:09.928 ************************************ 00:12:09.928 END TEST nvmf_bdevio 00:12:09.928 ************************************ 00:12:09.928 05:10:59 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:12:09.928 05:10:59 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:09.928 05:10:59 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:12:09.928 05:10:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:09.928 05:10:59 -- common/autotest_common.sh@10 -- # set +x 00:12:09.928 ************************************ 00:12:09.928 START TEST nvmf_bdevio_no_huge 00:12:09.928 ************************************ 00:12:09.928 05:10:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:09.928 * Looking for test storage... 
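Before the no-huge variant repeats the same setup, the nvmftestfini block that just closed nvmf_bdevio unwinds the fixture in reverse order. A condensed sketch of what that trace amounts to (the pid is specific to this run, and the namespace removal is an assumption about what remove_spdk_ns does rather than a command issued literally as shown):

    # Sketch of the per-test teardown; 76343 was this run's nvmf_tgt pid.
    sync                                   # settle outstanding I/O before unloading modules
    modprobe -v -r nvme-tcp                # unload the kernel initiator stack
    modprobe -v -r nvme-fabrics
    kill 76343                             # stop the nvmf_tgt started for this test
    ip netns delete nvmf_tgt_ns_spdk       # assumption: roughly what remove_spdk_ns amounts to here
    ip -4 addr flush nvmf_init_if          # clear the initiator-side address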
00:12:09.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:09.928 05:10:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:09.928 05:10:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:09.928 05:10:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:09.928 05:10:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:09.928 05:10:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:09.928 05:10:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:09.928 05:10:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:09.928 05:10:59 -- scripts/common.sh@335 -- # IFS=.-: 00:12:09.928 05:10:59 -- scripts/common.sh@335 -- # read -ra ver1 00:12:09.928 05:10:59 -- scripts/common.sh@336 -- # IFS=.-: 00:12:09.928 05:10:59 -- scripts/common.sh@336 -- # read -ra ver2 00:12:09.928 05:10:59 -- scripts/common.sh@337 -- # local 'op=<' 00:12:09.928 05:10:59 -- scripts/common.sh@339 -- # ver1_l=2 00:12:09.928 05:10:59 -- scripts/common.sh@340 -- # ver2_l=1 00:12:09.928 05:10:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:09.928 05:10:59 -- scripts/common.sh@343 -- # case "$op" in 00:12:09.928 05:10:59 -- scripts/common.sh@344 -- # : 1 00:12:09.928 05:10:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:09.928 05:10:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:09.928 05:10:59 -- scripts/common.sh@364 -- # decimal 1 00:12:09.928 05:10:59 -- scripts/common.sh@352 -- # local d=1 00:12:09.928 05:10:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:09.928 05:10:59 -- scripts/common.sh@354 -- # echo 1 00:12:09.928 05:10:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:09.928 05:10:59 -- scripts/common.sh@365 -- # decimal 2 00:12:09.928 05:10:59 -- scripts/common.sh@352 -- # local d=2 00:12:09.928 05:10:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:09.928 05:10:59 -- scripts/common.sh@354 -- # echo 2 00:12:09.928 05:10:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:09.928 05:10:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:09.928 05:10:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:09.928 05:10:59 -- scripts/common.sh@367 -- # return 0 00:12:09.928 05:10:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:09.928 05:10:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:09.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.928 --rc genhtml_branch_coverage=1 00:12:09.928 --rc genhtml_function_coverage=1 00:12:09.928 --rc genhtml_legend=1 00:12:09.928 --rc geninfo_all_blocks=1 00:12:09.928 --rc geninfo_unexecuted_blocks=1 00:12:09.928 00:12:09.928 ' 00:12:09.928 05:10:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:09.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.929 --rc genhtml_branch_coverage=1 00:12:09.929 --rc genhtml_function_coverage=1 00:12:09.929 --rc genhtml_legend=1 00:12:09.929 --rc geninfo_all_blocks=1 00:12:09.929 --rc geninfo_unexecuted_blocks=1 00:12:09.929 00:12:09.929 ' 00:12:09.929 05:10:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:09.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.929 --rc genhtml_branch_coverage=1 00:12:09.929 --rc genhtml_function_coverage=1 00:12:09.929 --rc genhtml_legend=1 00:12:09.929 --rc geninfo_all_blocks=1 00:12:09.929 --rc geninfo_unexecuted_blocks=1 00:12:09.929 00:12:09.929 ' 00:12:09.929 
05:10:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:09.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.929 --rc genhtml_branch_coverage=1 00:12:09.929 --rc genhtml_function_coverage=1 00:12:09.929 --rc genhtml_legend=1 00:12:09.929 --rc geninfo_all_blocks=1 00:12:09.929 --rc geninfo_unexecuted_blocks=1 00:12:09.929 00:12:09.929 ' 00:12:10.245 05:10:59 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:10.245 05:10:59 -- nvmf/common.sh@7 -- # uname -s 00:12:10.245 05:10:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.245 05:10:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.245 05:10:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.245 05:10:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.245 05:10:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.245 05:10:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.245 05:10:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.245 05:10:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.245 05:10:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.245 05:10:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.245 05:10:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:12:10.245 05:10:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:12:10.245 05:10:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.245 05:10:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.245 05:10:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:10.245 05:10:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:10.245 05:10:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.245 05:10:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.245 05:10:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.245 05:10:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.245 05:10:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.245 05:10:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.245 05:10:59 -- paths/export.sh@5 -- # export PATH 00:12:10.245 05:10:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.245 05:10:59 -- nvmf/common.sh@46 -- # : 0 00:12:10.245 05:10:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:10.245 05:10:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:10.245 05:10:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:10.245 05:10:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.245 05:10:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.245 05:10:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:10.245 05:10:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:10.245 05:10:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:10.245 05:10:59 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:10.245 05:10:59 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:10.245 05:10:59 -- target/bdevio.sh@14 -- # nvmftestinit 00:12:10.245 05:10:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:10.245 05:10:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.245 05:10:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:10.245 05:10:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:10.245 05:10:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:10.245 05:10:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.245 05:10:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.245 05:10:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.245 05:10:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:10.245 05:10:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:10.245 05:10:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:10.245 05:10:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:10.245 05:10:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:10.245 05:10:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:10.245 05:10:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.245 05:10:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.245 05:10:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:10.245 05:10:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:10.245 05:10:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:10.245 05:10:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:10.245 05:10:59 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:10.245 05:10:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.245 05:10:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:10.245 05:10:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:10.245 05:10:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:10.245 05:10:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:10.245 05:10:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:10.246 05:10:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:10.246 Cannot find device "nvmf_tgt_br" 00:12:10.246 05:10:59 -- nvmf/common.sh@154 -- # true 00:12:10.246 05:10:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:10.246 Cannot find device "nvmf_tgt_br2" 00:12:10.246 05:10:59 -- nvmf/common.sh@155 -- # true 00:12:10.246 05:10:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:10.246 05:10:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:10.246 Cannot find device "nvmf_tgt_br" 00:12:10.246 05:10:59 -- nvmf/common.sh@157 -- # true 00:12:10.246 05:10:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:10.246 Cannot find device "nvmf_tgt_br2" 00:12:10.246 05:10:59 -- nvmf/common.sh@158 -- # true 00:12:10.246 05:10:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:10.246 05:10:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:10.246 05:10:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:10.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:10.246 05:10:59 -- nvmf/common.sh@161 -- # true 00:12:10.246 05:10:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:10.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:10.246 05:10:59 -- nvmf/common.sh@162 -- # true 00:12:10.246 05:10:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:10.246 05:10:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:10.246 05:10:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:10.246 05:10:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:10.246 05:10:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:10.246 05:10:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:10.246 05:10:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:10.246 05:10:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:10.246 05:11:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:10.246 05:11:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:10.246 05:11:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:10.246 05:11:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:10.246 05:11:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:10.246 05:11:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:10.504 05:11:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:10.504 05:11:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:10.504 05:11:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:10.504 05:11:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:10.504 05:11:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:10.504 05:11:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:10.504 05:11:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:10.504 05:11:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:10.504 05:11:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:10.504 05:11:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:10.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:10.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:12:10.504 00:12:10.504 --- 10.0.0.2 ping statistics --- 00:12:10.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.504 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:12:10.504 05:11:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:10.505 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:10.505 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:12:10.505 00:12:10.505 --- 10.0.0.3 ping statistics --- 00:12:10.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.505 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:12:10.505 05:11:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:10.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:10.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:12:10.505 00:12:10.505 --- 10.0.0.1 ping statistics --- 00:12:10.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.505 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:12:10.505 05:11:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:10.505 05:11:00 -- nvmf/common.sh@421 -- # return 0 00:12:10.505 05:11:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:10.505 05:11:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:10.505 05:11:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:10.505 05:11:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:10.505 05:11:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:10.505 05:11:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:10.505 05:11:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:10.505 05:11:00 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:10.505 05:11:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:10.505 05:11:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:10.505 05:11:00 -- common/autotest_common.sh@10 -- # set +x 00:12:10.505 05:11:00 -- nvmf/common.sh@469 -- # nvmfpid=76553 00:12:10.505 05:11:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:10.505 05:11:00 -- nvmf/common.sh@470 -- # waitforlisten 76553 00:12:10.505 05:11:00 -- common/autotest_common.sh@829 -- # '[' -z 76553 ']' 00:12:10.505 05:11:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.505 05:11:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:10.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
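The no-huge variant rebuilds the same namespace topology; the difference that gives the test its name is in the app start line above. A sketch of that launch with the flags from the trace:

    # Sketch: start nvmf_tgt in the target namespace without hugepages.
    #   -i 0        shared-memory id (NVMF_APP_SHM_ID)
    #   -e 0xFFFF   enable all tracepoint groups ("Tracepoint Group Mask 0xFFFF specified")
    #   --no-huge   back DPDK memory with ordinary pages instead of hugepages
    #   -s 1024     cap the memory pool at 1024 MB
    #   -m 0x78     core mask for cores 3-6, matching the reactor start-up lines
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78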
00:12:10.505 05:11:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.505 05:11:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:10.505 05:11:00 -- common/autotest_common.sh@10 -- # set +x 00:12:10.505 [2024-12-08 05:11:00.200821] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:10.505 [2024-12-08 05:11:00.200942] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:10.763 [2024-12-08 05:11:00.343277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.763 [2024-12-08 05:11:00.458995] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:10.763 [2024-12-08 05:11:00.459175] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.763 [2024-12-08 05:11:00.459190] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.763 [2024-12-08 05:11:00.459200] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:10.763 [2024-12-08 05:11:00.459317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:10.763 [2024-12-08 05:11:00.459660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:12:10.763 [2024-12-08 05:11:00.459757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:12:10.763 [2024-12-08 05:11:00.459763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.696 05:11:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:11.696 05:11:01 -- common/autotest_common.sh@862 -- # return 0 00:12:11.696 05:11:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:11.696 05:11:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:11.696 05:11:01 -- common/autotest_common.sh@10 -- # set +x 00:12:11.696 05:11:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.696 05:11:01 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:11.696 05:11:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.696 05:11:01 -- common/autotest_common.sh@10 -- # set +x 00:12:11.696 [2024-12-08 05:11:01.375546] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.696 05:11:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.696 05:11:01 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:11.696 05:11:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.696 05:11:01 -- common/autotest_common.sh@10 -- # set +x 00:12:11.696 Malloc0 00:12:11.696 05:11:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.696 05:11:01 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:11.696 05:11:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.696 05:11:01 -- common/autotest_common.sh@10 -- # set +x 00:12:11.696 05:11:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.697 05:11:01 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:11.697 05:11:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.697 
05:11:01 -- common/autotest_common.sh@10 -- # set +x 00:12:11.697 05:11:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.697 05:11:01 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.697 05:11:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.697 05:11:01 -- common/autotest_common.sh@10 -- # set +x 00:12:11.697 [2024-12-08 05:11:01.417841] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.697 05:11:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.697 05:11:01 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:11.697 05:11:01 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:11.697 05:11:01 -- nvmf/common.sh@520 -- # config=() 00:12:11.697 05:11:01 -- nvmf/common.sh@520 -- # local subsystem config 00:12:11.697 05:11:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:12:11.697 05:11:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:12:11.697 { 00:12:11.697 "params": { 00:12:11.697 "name": "Nvme$subsystem", 00:12:11.697 "trtype": "$TEST_TRANSPORT", 00:12:11.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:11.697 "adrfam": "ipv4", 00:12:11.697 "trsvcid": "$NVMF_PORT", 00:12:11.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:11.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:11.697 "hdgst": ${hdgst:-false}, 00:12:11.697 "ddgst": ${ddgst:-false} 00:12:11.697 }, 00:12:11.697 "method": "bdev_nvme_attach_controller" 00:12:11.697 } 00:12:11.697 EOF 00:12:11.697 )") 00:12:11.697 05:11:01 -- nvmf/common.sh@542 -- # cat 00:12:11.697 05:11:01 -- nvmf/common.sh@544 -- # jq . 00:12:11.697 05:11:01 -- nvmf/common.sh@545 -- # IFS=, 00:12:11.697 05:11:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:12:11.697 "params": { 00:12:11.697 "name": "Nvme1", 00:12:11.697 "trtype": "tcp", 00:12:11.697 "traddr": "10.0.0.2", 00:12:11.697 "adrfam": "ipv4", 00:12:11.697 "trsvcid": "4420", 00:12:11.697 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:11.697 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:11.697 "hdgst": false, 00:12:11.697 "ddgst": false 00:12:11.697 }, 00:12:11.697 "method": "bdev_nvme_attach_controller" 00:12:11.697 }' 00:12:11.697 [2024-12-08 05:11:01.477217] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:11.697 [2024-12-08 05:11:01.477347] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid76590 ] 00:12:11.953 [2024-12-08 05:11:01.632069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:12.210 [2024-12-08 05:11:01.763020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.210 [2024-12-08 05:11:01.763146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.210 [2024-12-08 05:11:01.763435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.210 [2024-12-08 05:11:01.922033] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
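With the target up inside the namespace, the bdevio fixture is built entirely over RPC. rpc_cmd is a thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock, so the sequence echoed above is equivalent to:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py             # targets the default /var/tmp/spdk.sock
    $RPC nvmf_create_transport -t tcp -o -u 8192                # TCP transport with the options used above
    $RPC bdev_malloc_create 64 512 -b Malloc0                   # 64 MiB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio itself never touches that RPC socket: gen_nvmf_target_json hands it a JSON config on /dev/fd/62 whose single bdev_nvme_attach_controller entry (name Nvme1, traddr 10.0.0.2, trsvcid 4420, subnqn cnode1, hostnqn host1) is the block printed above.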
00:12:12.210 [2024-12-08 05:11:01.922337] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:12:12.210 I/O targets: 00:12:12.210 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:12.210 00:12:12.210 00:12:12.210 CUnit - A unit testing framework for C - Version 2.1-3 00:12:12.210 http://cunit.sourceforge.net/ 00:12:12.210 00:12:12.210 00:12:12.210 Suite: bdevio tests on: Nvme1n1 00:12:12.210 Test: blockdev write read block ...passed 00:12:12.210 Test: blockdev write zeroes read block ...passed 00:12:12.210 Test: blockdev write zeroes read no split ...passed 00:12:12.210 Test: blockdev write zeroes read split ...passed 00:12:12.210 Test: blockdev write zeroes read split partial ...passed 00:12:12.210 Test: blockdev reset ...[2024-12-08 05:11:01.974225] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:12.210 [2024-12-08 05:11:01.974657] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1880260 (9): Bad file descriptor 00:12:12.210 [2024-12-08 05:11:01.989752] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:12.210 passed 00:12:12.210 Test: blockdev write read 8 blocks ...passed 00:12:12.210 Test: blockdev write read size > 128k ...passed 00:12:12.210 Test: blockdev write read invalid size ...passed 00:12:12.210 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.210 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.210 Test: blockdev write read max offset ...passed 00:12:12.468 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.468 Test: blockdev writev readv 8 blocks ...passed 00:12:12.468 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.468 Test: blockdev writev readv block ...passed 00:12:12.468 Test: blockdev writev readv size > 128k ...passed 00:12:12.468 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.468 Test: blockdev comparev and writev ...[2024-12-08 05:11:02.001482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.468 [2024-12-08 05:11:02.001572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:12.468 [2024-12-08 05:11:02.001610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.468 [2024-12-08 05:11:02.001629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:12.468 [2024-12-08 05:11:02.002107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.468 [2024-12-08 05:11:02.002163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:12.468 [2024-12-08 05:11:02.002197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.468 [2024-12-08 05:11:02.002218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:12.468 [2024-12-08 05:11:02.002767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.468 [2024-12-08 05:11:02.002815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:12.468 [2024-12-08 05:11:02.002848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.468 [2024-12-08 05:11:02.002870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:12.468 [2024-12-08 05:11:02.003784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.468 [2024-12-08 05:11:02.003832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:12.468 [2024-12-08 05:11:02.003868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:12.468 [2024-12-08 05:11:02.003887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:12.468 passed 00:12:12.468 Test: blockdev nvme passthru rw ...passed 00:12:12.468 Test: blockdev nvme passthru vendor specific ...[2024-12-08 05:11:02.005064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:12.469 [2024-12-08 05:11:02.005111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:12.469 [2024-12-08 05:11:02.005280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:12.469 [2024-12-08 05:11:02.005321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:12.469 [2024-12-08 05:11:02.005464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:12.469 [2024-12-08 05:11:02.005495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:12.469 passed 00:12:12.469 Test: blockdev nvme admin passthru ...[2024-12-08 05:11:02.005648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:12.469 [2024-12-08 05:11:02.005690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:12.469 passed 00:12:12.469 Test: blockdev copy ...passed 00:12:12.469 00:12:12.469 Run Summary: Type Total Ran Passed Failed Inactive 00:12:12.469 suites 1 1 n/a 0 0 00:12:12.469 tests 23 23 23 0 0 00:12:12.469 asserts 152 152 152 0 n/a 00:12:12.469 00:12:12.469 Elapsed time = 0.203 seconds 00:12:12.726 05:11:02 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:12.726 05:11:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.726 05:11:02 -- common/autotest_common.sh@10 -- # set +x 00:12:12.726 05:11:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.726 05:11:02 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:12.726 05:11:02 -- target/bdevio.sh@30 -- # nvmftestfini 00:12:12.726 05:11:02 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:12:12.726 05:11:02 -- nvmf/common.sh@116 -- # sync 00:12:12.984 05:11:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:12.984 05:11:02 -- nvmf/common.sh@119 -- # set +e 00:12:12.984 05:11:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:12.984 05:11:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:12.984 rmmod nvme_tcp 00:12:12.984 rmmod nvme_fabrics 00:12:12.984 rmmod nvme_keyring 00:12:12.984 05:11:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:12.984 05:11:02 -- nvmf/common.sh@123 -- # set -e 00:12:12.984 05:11:02 -- nvmf/common.sh@124 -- # return 0 00:12:12.984 05:11:02 -- nvmf/common.sh@477 -- # '[' -n 76553 ']' 00:12:12.984 05:11:02 -- nvmf/common.sh@478 -- # killprocess 76553 00:12:12.984 05:11:02 -- common/autotest_common.sh@936 -- # '[' -z 76553 ']' 00:12:12.984 05:11:02 -- common/autotest_common.sh@940 -- # kill -0 76553 00:12:12.984 05:11:02 -- common/autotest_common.sh@941 -- # uname 00:12:12.984 05:11:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:12.984 05:11:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76553 00:12:12.984 killing process with pid 76553 00:12:12.984 05:11:02 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:12:12.984 05:11:02 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:12:12.984 05:11:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76553' 00:12:12.984 05:11:02 -- common/autotest_common.sh@955 -- # kill 76553 00:12:12.984 05:11:02 -- common/autotest_common.sh@960 -- # wait 76553 00:12:13.549 05:11:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:13.549 05:11:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:13.549 05:11:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:13.549 05:11:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:13.549 05:11:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:13.549 05:11:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.549 05:11:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.549 05:11:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.549 05:11:03 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:13.549 00:12:13.549 real 0m3.625s 00:12:13.549 user 0m11.651s 00:12:13.549 sys 0m1.482s 00:12:13.549 05:11:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:13.549 05:11:03 -- common/autotest_common.sh@10 -- # set +x 00:12:13.549 ************************************ 00:12:13.549 END TEST nvmf_bdevio_no_huge 00:12:13.549 ************************************ 00:12:13.549 05:11:03 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:13.549 05:11:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:13.549 05:11:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:13.549 05:11:03 -- common/autotest_common.sh@10 -- # set +x 00:12:13.549 ************************************ 00:12:13.549 START TEST nvmf_tls 00:12:13.549 ************************************ 00:12:13.549 05:11:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:13.549 * Looking for test storage... 
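Cleanup for the no-huge bdevio test mirrors its setup: the subsystem is deleted over RPC, the nvmf_tgt process is killed, the initiator kernel modules are unloaded (the rmmod lines above), and the namespace plus bridge are removed by nvmf_tcp_fini. A rough equivalent of that teardown, assuming remove_spdk_ns boils down to deleting the test namespace:

    kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null   # stop the nvmf_tgt started for this test
    modprobe -r nvme-tcp nvme-fabrics                # drop the initiator-side kernel modules
    ip netns delete nvmf_tgt_ns_spdk                 # takes nvmf_tgt_if / nvmf_tgt_if2 with it
    ip link delete nvmf_br type bridge
    ip -4 addr flush nvmf_init_if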
00:12:13.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:13.549 05:11:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:13.549 05:11:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:13.549 05:11:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:13.808 05:11:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:13.808 05:11:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:13.808 05:11:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:13.808 05:11:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:13.808 05:11:03 -- scripts/common.sh@335 -- # IFS=.-: 00:12:13.808 05:11:03 -- scripts/common.sh@335 -- # read -ra ver1 00:12:13.808 05:11:03 -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.808 05:11:03 -- scripts/common.sh@336 -- # read -ra ver2 00:12:13.808 05:11:03 -- scripts/common.sh@337 -- # local 'op=<' 00:12:13.808 05:11:03 -- scripts/common.sh@339 -- # ver1_l=2 00:12:13.808 05:11:03 -- scripts/common.sh@340 -- # ver2_l=1 00:12:13.808 05:11:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:13.808 05:11:03 -- scripts/common.sh@343 -- # case "$op" in 00:12:13.808 05:11:03 -- scripts/common.sh@344 -- # : 1 00:12:13.808 05:11:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:13.808 05:11:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:13.808 05:11:03 -- scripts/common.sh@364 -- # decimal 1 00:12:13.808 05:11:03 -- scripts/common.sh@352 -- # local d=1 00:12:13.808 05:11:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.808 05:11:03 -- scripts/common.sh@354 -- # echo 1 00:12:13.808 05:11:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:13.808 05:11:03 -- scripts/common.sh@365 -- # decimal 2 00:12:13.808 05:11:03 -- scripts/common.sh@352 -- # local d=2 00:12:13.808 05:11:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.808 05:11:03 -- scripts/common.sh@354 -- # echo 2 00:12:13.808 05:11:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:13.808 05:11:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:13.808 05:11:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:13.808 05:11:03 -- scripts/common.sh@367 -- # return 0 00:12:13.808 05:11:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.808 05:11:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:13.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.808 --rc genhtml_branch_coverage=1 00:12:13.808 --rc genhtml_function_coverage=1 00:12:13.808 --rc genhtml_legend=1 00:12:13.808 --rc geninfo_all_blocks=1 00:12:13.808 --rc geninfo_unexecuted_blocks=1 00:12:13.808 00:12:13.808 ' 00:12:13.808 05:11:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:13.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.808 --rc genhtml_branch_coverage=1 00:12:13.808 --rc genhtml_function_coverage=1 00:12:13.808 --rc genhtml_legend=1 00:12:13.808 --rc geninfo_all_blocks=1 00:12:13.808 --rc geninfo_unexecuted_blocks=1 00:12:13.808 00:12:13.808 ' 00:12:13.808 05:11:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:13.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.808 --rc genhtml_branch_coverage=1 00:12:13.808 --rc genhtml_function_coverage=1 00:12:13.808 --rc genhtml_legend=1 00:12:13.808 --rc geninfo_all_blocks=1 00:12:13.808 --rc geninfo_unexecuted_blocks=1 00:12:13.808 00:12:13.808 ' 00:12:13.808 
05:11:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:13.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.808 --rc genhtml_branch_coverage=1 00:12:13.808 --rc genhtml_function_coverage=1 00:12:13.808 --rc genhtml_legend=1 00:12:13.808 --rc geninfo_all_blocks=1 00:12:13.808 --rc geninfo_unexecuted_blocks=1 00:12:13.808 00:12:13.808 ' 00:12:13.808 05:11:03 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:13.808 05:11:03 -- nvmf/common.sh@7 -- # uname -s 00:12:13.808 05:11:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.808 05:11:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.808 05:11:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.808 05:11:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.808 05:11:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.808 05:11:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.808 05:11:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.808 05:11:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.808 05:11:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.808 05:11:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.808 05:11:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:12:13.808 05:11:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:12:13.808 05:11:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.808 05:11:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.808 05:11:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:13.808 05:11:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:13.808 05:11:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.808 05:11:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.808 05:11:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.808 05:11:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.808 05:11:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.808 05:11:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.808 05:11:03 -- paths/export.sh@5 -- # export PATH 00:12:13.809 05:11:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.809 05:11:03 -- nvmf/common.sh@46 -- # : 0 00:12:13.809 05:11:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:13.809 05:11:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:13.809 05:11:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:13.809 05:11:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.809 05:11:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.809 05:11:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:13.809 05:11:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:13.809 05:11:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:13.809 05:11:03 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:13.809 05:11:03 -- target/tls.sh@71 -- # nvmftestinit 00:12:13.809 05:11:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:13.809 05:11:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.809 05:11:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:13.809 05:11:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:13.809 05:11:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:13.809 05:11:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.809 05:11:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.809 05:11:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.809 05:11:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:13.809 05:11:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:13.809 05:11:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:13.809 05:11:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:13.809 05:11:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:13.809 05:11:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:13.809 05:11:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:13.809 05:11:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:13.809 05:11:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:13.809 05:11:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:13.809 05:11:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:13.809 05:11:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:13.809 05:11:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:13.809 
05:11:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:13.809 05:11:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:13.809 05:11:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:13.809 05:11:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:13.809 05:11:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:13.809 05:11:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:13.809 05:11:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:13.809 Cannot find device "nvmf_tgt_br" 00:12:13.809 05:11:03 -- nvmf/common.sh@154 -- # true 00:12:13.809 05:11:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:13.809 Cannot find device "nvmf_tgt_br2" 00:12:13.809 05:11:03 -- nvmf/common.sh@155 -- # true 00:12:13.809 05:11:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:13.809 05:11:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:13.809 Cannot find device "nvmf_tgt_br" 00:12:13.809 05:11:03 -- nvmf/common.sh@157 -- # true 00:12:13.809 05:11:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:13.809 Cannot find device "nvmf_tgt_br2" 00:12:13.809 05:11:03 -- nvmf/common.sh@158 -- # true 00:12:13.809 05:11:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:13.809 05:11:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:13.809 05:11:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:13.809 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:13.809 05:11:03 -- nvmf/common.sh@161 -- # true 00:12:13.809 05:11:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:13.809 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:13.809 05:11:03 -- nvmf/common.sh@162 -- # true 00:12:13.809 05:11:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:13.809 05:11:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:13.809 05:11:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:13.809 05:11:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:14.068 05:11:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:14.068 05:11:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:14.068 05:11:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:14.068 05:11:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:14.068 05:11:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:14.068 05:11:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:14.068 05:11:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:14.068 05:11:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:14.068 05:11:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:14.068 05:11:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:14.068 05:11:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:14.068 05:11:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:14.068 05:11:03 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:14.068 05:11:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:14.068 05:11:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:14.068 05:11:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:14.068 05:11:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:14.068 05:11:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:14.068 05:11:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:14.068 05:11:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:14.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:14.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:12:14.335 00:12:14.335 --- 10.0.0.2 ping statistics --- 00:12:14.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.335 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:12:14.335 05:11:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:14.335 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:14.335 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:12:14.335 00:12:14.335 --- 10.0.0.3 ping statistics --- 00:12:14.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.335 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:12:14.335 05:11:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:14.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:14.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:12:14.335 00:12:14.335 --- 10.0.0.1 ping statistics --- 00:12:14.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.335 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:14.335 05:11:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.335 05:11:03 -- nvmf/common.sh@421 -- # return 0 00:12:14.335 05:11:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:14.335 05:11:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.335 05:11:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:14.335 05:11:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:14.335 05:11:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.335 05:11:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:14.335 05:11:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:14.335 05:11:03 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:14.335 05:11:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:14.335 05:11:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:14.335 05:11:03 -- common/autotest_common.sh@10 -- # set +x 00:12:14.335 05:11:03 -- nvmf/common.sh@469 -- # nvmfpid=76779 00:12:14.335 05:11:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:14.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
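For the TLS suite the target is launched inside the same namespace but with --wait-for-rpc, so the socket layer can be switched to the ssl implementation before the SPDK framework finishes initializing; each option is then read back with sock_impl_get_options piped through jq (the tls_version and ktls checks that follow). Condensed, the startup sequence looks like:

    NS="ip netns exec nvmf_tgt_ns_spdk"
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $NS /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    # (the harness waits for /var/tmp/spdk.sock to appear before issuing RPCs)
    $RPC sock_set_default_impl -i ssl                        # use the ssl sock implementation
    $RPC sock_impl_set_options -i ssl --tls-version 13       # TLS 1.3
    $RPC sock_impl_get_options -i ssl | jq -r .tls_version   # read-back check: expect 13
    $RPC framework_start_init                                # resume init once sockets are configured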
00:12:14.335 05:11:03 -- nvmf/common.sh@470 -- # waitforlisten 76779 00:12:14.335 05:11:03 -- common/autotest_common.sh@829 -- # '[' -z 76779 ']' 00:12:14.335 05:11:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.335 05:11:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:14.335 05:11:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.335 05:11:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:14.335 05:11:03 -- common/autotest_common.sh@10 -- # set +x 00:12:14.335 [2024-12-08 05:11:03.964800] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:14.335 [2024-12-08 05:11:03.964923] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.335 [2024-12-08 05:11:04.111306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.594 [2024-12-08 05:11:04.154471] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:14.594 [2024-12-08 05:11:04.154727] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.594 [2024-12-08 05:11:04.154773] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.594 [2024-12-08 05:11:04.154789] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:14.594 [2024-12-08 05:11:04.154837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.594 05:11:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:14.594 05:11:04 -- common/autotest_common.sh@862 -- # return 0 00:12:14.594 05:11:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:14.594 05:11:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:14.594 05:11:04 -- common/autotest_common.sh@10 -- # set +x 00:12:14.594 05:11:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.594 05:11:04 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:12:14.594 05:11:04 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:15.194 true 00:12:15.194 05:11:04 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:15.194 05:11:04 -- target/tls.sh@82 -- # jq -r .tls_version 00:12:15.470 05:11:05 -- target/tls.sh@82 -- # version=0 00:12:15.470 05:11:05 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:12:15.470 05:11:05 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:15.728 05:11:05 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:15.728 05:11:05 -- target/tls.sh@90 -- # jq -r .tls_version 00:12:15.985 05:11:05 -- target/tls.sh@90 -- # version=13 00:12:15.985 05:11:05 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:12:15.985 05:11:05 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:16.552 05:11:06 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:16.552 05:11:06 -- target/tls.sh@98 -- # jq -r .tls_version 00:12:16.832 05:11:06 
-- target/tls.sh@98 -- # version=7 00:12:16.832 05:11:06 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:12:16.832 05:11:06 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:16.832 05:11:06 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:12:17.091 05:11:06 -- target/tls.sh@105 -- # ktls=false 00:12:17.091 05:11:06 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:12:17.091 05:11:06 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:12:17.658 05:11:07 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:17.658 05:11:07 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:12:17.915 05:11:07 -- target/tls.sh@113 -- # ktls=true 00:12:17.915 05:11:07 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:12:17.915 05:11:07 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:12:18.480 05:11:07 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:18.480 05:11:07 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:12:18.738 05:11:08 -- target/tls.sh@121 -- # ktls=false 00:12:18.738 05:11:08 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:12:18.738 05:11:08 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:12:18.738 05:11:08 -- target/tls.sh@49 -- # local key hash crc 00:12:18.738 05:11:08 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:12:18.738 05:11:08 -- target/tls.sh@51 -- # hash=01 00:12:18.738 05:11:08 -- target/tls.sh@52 -- # gzip -1 -c 00:12:18.738 05:11:08 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:12:18.738 05:11:08 -- target/tls.sh@52 -- # head -c 4 00:12:18.738 05:11:08 -- target/tls.sh@52 -- # tail -c8 00:12:18.738 05:11:08 -- target/tls.sh@52 -- # crc='p$H�' 00:12:18.738 05:11:08 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:12:18.738 05:11:08 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:12:18.738 05:11:08 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:18.738 05:11:08 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:18.738 05:11:08 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:12:18.738 05:11:08 -- target/tls.sh@49 -- # local key hash crc 00:12:18.738 05:11:08 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:12:18.738 05:11:08 -- target/tls.sh@51 -- # hash=01 00:12:18.738 05:11:08 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:12:18.738 05:11:08 -- target/tls.sh@52 -- # gzip -1 -c 00:12:18.738 05:11:08 -- target/tls.sh@52 -- # head -c 4 00:12:18.738 05:11:08 -- target/tls.sh@52 -- # tail -c8 00:12:18.738 05:11:08 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:12:18.738 05:11:08 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:12:18.738 05:11:08 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:12:18.738 05:11:08 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:18.738 05:11:08 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:18.738 05:11:08 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:18.738 05:11:08 -- target/tls.sh@131 -- # 
key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:18.738 05:11:08 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:18.738 05:11:08 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:18.738 05:11:08 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:18.738 05:11:08 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:18.738 05:11:08 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:18.996 05:11:08 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:19.561 05:11:09 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:19.561 05:11:09 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:19.561 05:11:09 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:19.818 [2024-12-08 05:11:09.451042] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.818 05:11:09 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:20.383 05:11:09 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:20.641 [2024-12-08 05:11:10.235233] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:20.641 [2024-12-08 05:11:10.235564] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.641 05:11:10 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:20.899 malloc0 00:12:20.899 05:11:10 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:21.467 05:11:10 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:21.726 05:11:11 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:31.757 Initializing NVMe Controllers 00:12:31.757 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:31.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:31.757 Initialization complete. Launching workers. 
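format_interchange_psk above converts a raw hex key into the NVMe TLS PSK interchange form: the CRC-32 of the key bytes is appended and the result is base64-encoded under a NVMeTLSkey-1:<hash>: prefix (hash identifier 01 here). The CRC is pulled out of the gzip trailer, where gzip -1 -c stores CRC-32 followed by the input length in the last eight bytes. Together with the TLS-enabled target setup echoed above, a condensed sketch:

    key=00112233445566778899aabbccddeeff
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)     # CRC-32 field from the gzip trailer
    psk="NVMeTLSkey-1:01:$(echo -n "$key$crc" | base64):"        # interchange format, hash id 01
    key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
    echo -n "$psk" > "$key_path" && chmod 0600 "$key_path"       # key files must not be world-readable

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"

The first data-path check then runs spdk_nvme_perf with -S ssl and --psk-path pointing at the same key file, which produces the IOPS/latency summary below.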
00:12:31.757 ======================================================== 00:12:31.757 Latency(us) 00:12:31.757 Device Information : IOPS MiB/s Average min max 00:12:31.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9759.96 38.12 6558.76 1881.91 11486.26 00:12:31.757 ======================================================== 00:12:31.757 Total : 9759.96 38.12 6558.76 1881.91 11486.26 00:12:31.757 00:12:31.757 05:11:21 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:31.757 05:11:21 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:31.757 05:11:21 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:31.757 05:11:21 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:31.757 05:11:21 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:12:31.757 05:11:21 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:31.757 05:11:21 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:31.757 05:11:21 -- target/tls.sh@28 -- # bdevperf_pid=77044 00:12:31.757 05:11:21 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:31.757 05:11:21 -- target/tls.sh@31 -- # waitforlisten 77044 /var/tmp/bdevperf.sock 00:12:31.758 05:11:21 -- common/autotest_common.sh@829 -- # '[' -z 77044 ']' 00:12:31.758 05:11:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:31.758 05:11:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:31.758 05:11:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:31.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:31.758 05:11:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:31.758 05:11:21 -- common/autotest_common.sh@10 -- # set +x 00:12:31.758 [2024-12-08 05:11:21.490920] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:31.758 [2024-12-08 05:11:21.491200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77044 ] 00:12:32.016 [2024-12-08 05:11:21.626254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.016 [2024-12-08 05:11:21.670771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.016 05:11:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:32.016 05:11:21 -- common/autotest_common.sh@862 -- # return 0 00:12:32.016 05:11:21 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:32.274 [2024-12-08 05:11:22.002075] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:32.533 TLSTESTn1 00:12:32.533 05:11:22 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:32.533 Running I/O for 10 seconds... 
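run_bdevperf then exercises the same path from the bdevperf application: bdevperf is started in wait mode (-z) on its own RPC socket, a controller is attached over TLS with the key under test (creating bdev TLSTESTn1), and the registered job is kicked off through bdevperf.py. Condensed from the commands above:

    BP_SOCK=/var/tmp/bdevperf.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r "$BP_SOCK" -q 128 -o 4096 -w verify -t 10 &
    # (the harness waits for the bdevperf RPC socket before attaching)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$BP_SOCK" bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$BP_SOCK" perform_tests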
00:12:42.501 00:12:42.501 Latency(us) 00:12:42.501 [2024-12-08T05:11:32.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.501 [2024-12-08T05:11:32.287Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:42.501 Verification LBA range: start 0x0 length 0x2000 00:12:42.501 TLSTESTn1 : 10.01 5431.35 21.22 0.00 0.00 23529.02 5093.93 31933.91 00:12:42.501 [2024-12-08T05:11:32.287Z] =================================================================================================================== 00:12:42.501 [2024-12-08T05:11:32.287Z] Total : 5431.35 21.22 0.00 0.00 23529.02 5093.93 31933.91 00:12:42.501 0 00:12:42.758 05:11:32 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:42.758 05:11:32 -- target/tls.sh@45 -- # killprocess 77044 00:12:42.758 05:11:32 -- common/autotest_common.sh@936 -- # '[' -z 77044 ']' 00:12:42.758 05:11:32 -- common/autotest_common.sh@940 -- # kill -0 77044 00:12:42.758 05:11:32 -- common/autotest_common.sh@941 -- # uname 00:12:42.758 05:11:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:42.759 05:11:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77044 00:12:42.759 killing process with pid 77044 00:12:42.759 Received shutdown signal, test time was about 10.000000 seconds 00:12:42.759 00:12:42.759 Latency(us) 00:12:42.759 [2024-12-08T05:11:32.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.759 [2024-12-08T05:11:32.545Z] =================================================================================================================== 00:12:42.759 [2024-12-08T05:11:32.545Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:42.759 05:11:32 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:42.759 05:11:32 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:42.759 05:11:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77044' 00:12:42.759 05:11:32 -- common/autotest_common.sh@955 -- # kill 77044 00:12:42.759 05:11:32 -- common/autotest_common.sh@960 -- # wait 77044 00:12:42.759 05:11:32 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:42.759 05:11:32 -- common/autotest_common.sh@650 -- # local es=0 00:12:42.759 05:11:32 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:42.759 05:11:32 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:42.759 05:11:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:42.759 05:11:32 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:42.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
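The block starting at target/tls.sh@155 is the first negative check: the controller attach uses key2.txt while the target only knows key1.txt for host1, so the TLS handshake must fail and run_bdevperf must return non-zero. The NOT helper from autotest_common.sh simply inverts that status; the intent is roughly:

    if run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt; then
        exit 1    # attaching with the wrong PSK must not succeed
    fi

The failure surfaces below as nvme_tcp read errors on the initiator side followed by a JSON-RPC error from bdev_nvme_attach_controller.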
00:12:42.759 05:11:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:42.759 05:11:32 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:42.759 05:11:32 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:42.759 05:11:32 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:42.759 05:11:32 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:42.759 05:11:32 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:12:42.759 05:11:32 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:42.759 05:11:32 -- target/tls.sh@28 -- # bdevperf_pid=77169 00:12:42.759 05:11:32 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:42.759 05:11:32 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:42.759 05:11:32 -- target/tls.sh@31 -- # waitforlisten 77169 /var/tmp/bdevperf.sock 00:12:42.759 05:11:32 -- common/autotest_common.sh@829 -- # '[' -z 77169 ']' 00:12:42.759 05:11:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:42.759 05:11:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:42.759 05:11:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:42.759 05:11:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:42.759 05:11:32 -- common/autotest_common.sh@10 -- # set +x 00:12:42.759 [2024-12-08 05:11:32.524831] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:42.759 [2024-12-08 05:11:32.525122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77169 ] 00:12:43.016 [2024-12-08 05:11:32.662561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.016 [2024-12-08 05:11:32.699992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:43.016 05:11:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.016 05:11:32 -- common/autotest_common.sh@862 -- # return 0 00:12:43.016 05:11:32 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:43.273 [2024-12-08 05:11:33.053551] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:43.532 [2024-12-08 05:11:33.063581] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:43.532 [2024-12-08 05:11:33.063619] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d5f90 (107): Transport endpoint is not connected 00:12:43.532 [2024-12-08 05:11:33.064614] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d5f90 (9): Bad file descriptor 00:12:43.532 [2024-12-08 05:11:33.065607] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:43.532 [2024-12-08 05:11:33.065897] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:43.532 [2024-12-08 05:11:33.066195] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:12:43.532 request: 00:12:43.532 { 00:12:43.532 "name": "TLSTEST", 00:12:43.532 "trtype": "tcp", 00:12:43.532 "traddr": "10.0.0.2", 00:12:43.532 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:43.532 "adrfam": "ipv4", 00:12:43.532 "trsvcid": "4420", 00:12:43.532 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:43.532 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:12:43.532 "method": "bdev_nvme_attach_controller", 00:12:43.532 "req_id": 1 00:12:43.532 } 00:12:43.532 Got JSON-RPC error response 00:12:43.532 response: 00:12:43.532 { 00:12:43.532 "code": -32602, 00:12:43.532 "message": "Invalid parameters" 00:12:43.532 } 00:12:43.532 05:11:33 -- target/tls.sh@36 -- # killprocess 77169 00:12:43.532 05:11:33 -- common/autotest_common.sh@936 -- # '[' -z 77169 ']' 00:12:43.532 05:11:33 -- common/autotest_common.sh@940 -- # kill -0 77169 00:12:43.532 05:11:33 -- common/autotest_common.sh@941 -- # uname 00:12:43.532 05:11:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:43.532 05:11:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77169 00:12:43.532 killing process with pid 77169 00:12:43.532 Received shutdown signal, test time was about 10.000000 seconds 00:12:43.532 00:12:43.532 Latency(us) 00:12:43.532 [2024-12-08T05:11:33.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.532 [2024-12-08T05:11:33.318Z] =================================================================================================================== 00:12:43.532 [2024-12-08T05:11:33.318Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:43.532 05:11:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:43.532 05:11:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:43.532 05:11:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77169' 00:12:43.532 05:11:33 -- common/autotest_common.sh@955 -- # kill 77169 00:12:43.532 05:11:33 -- common/autotest_common.sh@960 -- # wait 77169 00:12:43.532 05:11:33 -- target/tls.sh@37 -- # return 1 00:12:43.532 05:11:33 -- common/autotest_common.sh@653 -- # es=1 00:12:43.532 05:11:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:43.532 05:11:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:43.532 05:11:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:43.532 05:11:33 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:43.532 05:11:33 -- common/autotest_common.sh@650 -- # local es=0 00:12:43.532 05:11:33 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:43.532 05:11:33 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:43.532 05:11:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.532 05:11:33 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:43.532 05:11:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.532 05:11:33 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:43.532 05:11:33 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:43.532 05:11:33 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:43.532 05:11:33 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host2 00:12:43.532 05:11:33 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:12:43.532 05:11:33 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:43.532 05:11:33 -- target/tls.sh@28 -- # bdevperf_pid=77180 00:12:43.532 05:11:33 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:43.532 05:11:33 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:43.532 05:11:33 -- target/tls.sh@31 -- # waitforlisten 77180 /var/tmp/bdevperf.sock 00:12:43.532 05:11:33 -- common/autotest_common.sh@829 -- # '[' -z 77180 ']' 00:12:43.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:43.532 05:11:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:43.532 05:11:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:43.532 05:11:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:43.532 05:11:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:43.532 05:11:33 -- common/autotest_common.sh@10 -- # set +x 00:12:43.790 [2024-12-08 05:11:33.340392] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:43.790 [2024-12-08 05:11:33.340536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77180 ] 00:12:43.790 [2024-12-08 05:11:33.497526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.790 [2024-12-08 05:11:33.537306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.725 05:11:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:44.725 05:11:34 -- common/autotest_common.sh@862 -- # return 0 00:12:44.725 05:11:34 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:44.983 [2024-12-08 05:11:34.728409] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:44.983 [2024-12-08 05:11:34.733746] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:44.983 [2024-12-08 05:11:34.733924] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:44.983 [2024-12-08 05:11:34.733985] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:44.983 [2024-12-08 05:11:34.734291] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af2f90 (107): Transport endpoint is not connected 00:12:44.983 [2024-12-08 05:11:34.735272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af2f90 (9): Bad file descriptor 00:12:44.983 [2024-12-08 05:11:34.736267] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:44.983 [2024-12-08 05:11:34.736360] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:44.983 [2024-12-08 05:11:34.736430] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:12:44.983 request: 00:12:44.983 { 00:12:44.983 "name": "TLSTEST", 00:12:44.983 "trtype": "tcp", 00:12:44.983 "traddr": "10.0.0.2", 00:12:44.983 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:12:44.983 "adrfam": "ipv4", 00:12:44.983 "trsvcid": "4420", 00:12:44.983 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:44.983 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:12:44.983 "method": "bdev_nvme_attach_controller", 00:12:44.983 "req_id": 1 00:12:44.983 } 00:12:44.983 Got JSON-RPC error response 00:12:44.983 response: 00:12:44.983 { 00:12:44.983 "code": -32602, 00:12:44.983 "message": "Invalid parameters" 00:12:44.983 } 00:12:44.983 05:11:34 -- target/tls.sh@36 -- # killprocess 77180 00:12:44.983 05:11:34 -- common/autotest_common.sh@936 -- # '[' -z 77180 ']' 00:12:44.983 05:11:34 -- common/autotest_common.sh@940 -- # kill -0 77180 00:12:44.983 05:11:34 -- common/autotest_common.sh@941 -- # uname 00:12:45.241 05:11:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:45.241 05:11:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77180 00:12:45.241 05:11:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:45.241 05:11:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:45.241 05:11:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77180' 00:12:45.241 killing process with pid 77180 00:12:45.241 05:11:34 -- common/autotest_common.sh@955 -- # kill 77180 00:12:45.241 Received shutdown signal, test time was about 10.000000 seconds 00:12:45.241 00:12:45.241 Latency(us) 00:12:45.241 [2024-12-08T05:11:35.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.241 [2024-12-08T05:11:35.027Z] =================================================================================================================== 00:12:45.241 [2024-12-08T05:11:35.027Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:45.241 05:11:34 -- common/autotest_common.sh@960 -- # wait 77180 00:12:45.241 05:11:34 -- target/tls.sh@37 -- # return 1 00:12:45.241 05:11:34 -- common/autotest_common.sh@653 -- # es=1 00:12:45.241 05:11:34 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:45.241 05:11:34 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:45.241 05:11:34 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:45.241 05:11:34 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:45.241 05:11:34 -- common/autotest_common.sh@650 -- # local es=0 00:12:45.241 05:11:34 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:45.241 05:11:34 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:45.242 05:11:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:45.242 05:11:34 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:45.242 05:11:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:45.242 05:11:34 -- common/autotest_common.sh@653 -- # run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:45.242 05:11:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:45.242 05:11:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:12:45.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:45.242 05:11:34 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:45.242 05:11:34 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:12:45.242 05:11:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:45.242 05:11:34 -- target/tls.sh@28 -- # bdevperf_pid=77213 00:12:45.242 05:11:34 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:45.242 05:11:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:45.242 05:11:34 -- target/tls.sh@31 -- # waitforlisten 77213 /var/tmp/bdevperf.sock 00:12:45.242 05:11:34 -- common/autotest_common.sh@829 -- # '[' -z 77213 ']' 00:12:45.242 05:11:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:45.242 05:11:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:45.242 05:11:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:45.242 05:11:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:45.242 05:11:34 -- common/autotest_common.sh@10 -- # set +x 00:12:45.242 [2024-12-08 05:11:35.004312] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:45.242 [2024-12-08 05:11:35.004604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77213 ] 00:12:45.499 [2024-12-08 05:11:35.137369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.499 [2024-12-08 05:11:35.179520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.499 05:11:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:45.499 05:11:35 -- common/autotest_common.sh@862 -- # return 0 00:12:45.499 05:11:35 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:45.757 [2024-12-08 05:11:35.497594] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:45.757 [2024-12-08 05:11:35.503852] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:45.757 [2024-12-08 05:11:35.504069] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:45.757 [2024-12-08 05:11:35.504311] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spd[2024-12-08 05:11:35.504427] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905f90 (107): Transport endpoint is not connected 00:12:45.757 k_sock_recv() failed, errno 
107: Transport endpoint is not connected 00:12:45.757 [2024-12-08 05:11:35.505408] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905f90 (9): Bad file descriptor 00:12:45.757 [2024-12-08 05:11:35.506404] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:12:45.757 [2024-12-08 05:11:35.506434] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:45.757 [2024-12-08 05:11:35.506446] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:12:45.757 request: 00:12:45.757 { 00:12:45.757 "name": "TLSTEST", 00:12:45.757 "trtype": "tcp", 00:12:45.757 "traddr": "10.0.0.2", 00:12:45.757 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:45.757 "adrfam": "ipv4", 00:12:45.757 "trsvcid": "4420", 00:12:45.757 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:12:45.757 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:12:45.757 "method": "bdev_nvme_attach_controller", 00:12:45.757 "req_id": 1 00:12:45.757 } 00:12:45.757 Got JSON-RPC error response 00:12:45.757 response: 00:12:45.757 { 00:12:45.757 "code": -32602, 00:12:45.757 "message": "Invalid parameters" 00:12:45.757 } 00:12:45.757 05:11:35 -- target/tls.sh@36 -- # killprocess 77213 00:12:45.757 05:11:35 -- common/autotest_common.sh@936 -- # '[' -z 77213 ']' 00:12:45.757 05:11:35 -- common/autotest_common.sh@940 -- # kill -0 77213 00:12:45.757 05:11:35 -- common/autotest_common.sh@941 -- # uname 00:12:45.757 05:11:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:45.757 05:11:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77213 00:12:46.015 killing process with pid 77213 00:12:46.015 Received shutdown signal, test time was about 10.000000 seconds 00:12:46.015 00:12:46.015 Latency(us) 00:12:46.015 [2024-12-08T05:11:35.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:46.015 [2024-12-08T05:11:35.801Z] =================================================================================================================== 00:12:46.015 [2024-12-08T05:11:35.801Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:46.015 05:11:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:46.015 05:11:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:46.015 05:11:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77213' 00:12:46.015 05:11:35 -- common/autotest_common.sh@955 -- # kill 77213 00:12:46.015 05:11:35 -- common/autotest_common.sh@960 -- # wait 77213 00:12:46.015 05:11:35 -- target/tls.sh@37 -- # return 1 00:12:46.015 05:11:35 -- common/autotest_common.sh@653 -- # es=1 00:12:46.015 05:11:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:46.015 05:11:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:46.015 05:11:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:46.015 05:11:35 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:46.015 05:11:35 -- common/autotest_common.sh@650 -- # local es=0 00:12:46.015 05:11:35 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:46.015 05:11:35 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:46.015 05:11:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.015 05:11:35 -- common/autotest_common.sh@642 -- # type 
-t run_bdevperf 00:12:46.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:46.015 05:11:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.015 05:11:35 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:46.015 05:11:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:46.015 05:11:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:46.015 05:11:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:46.015 05:11:35 -- target/tls.sh@23 -- # psk= 00:12:46.015 05:11:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:46.015 05:11:35 -- target/tls.sh@28 -- # bdevperf_pid=77228 00:12:46.015 05:11:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:46.015 05:11:35 -- target/tls.sh@31 -- # waitforlisten 77228 /var/tmp/bdevperf.sock 00:12:46.015 05:11:35 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:46.015 05:11:35 -- common/autotest_common.sh@829 -- # '[' -z 77228 ']' 00:12:46.015 05:11:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:46.015 05:11:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:46.015 05:11:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:46.015 05:11:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:46.015 05:11:35 -- common/autotest_common.sh@10 -- # set +x 00:12:46.015 [2024-12-08 05:11:35.758361] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:46.015 [2024-12-08 05:11:35.758723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77228 ] 00:12:46.273 [2024-12-08 05:11:35.893657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.273 [2024-12-08 05:11:35.935854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.210 05:11:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:47.210 05:11:36 -- common/autotest_common.sh@862 -- # return 0 00:12:47.210 05:11:36 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:12:47.469 [2024-12-08 05:11:37.034116] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:47.469 [2024-12-08 05:11:37.035845] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe0c20 (9): Bad file descriptor 00:12:47.469 [2024-12-08 05:11:37.036839] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:47.469 [2024-12-08 05:11:37.037012] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:47.469 [2024-12-08 05:11:37.037185] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:12:47.469 request: 00:12:47.469 { 00:12:47.469 "name": "TLSTEST", 00:12:47.469 "trtype": "tcp", 00:12:47.469 "traddr": "10.0.0.2", 00:12:47.469 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:47.469 "adrfam": "ipv4", 00:12:47.469 "trsvcid": "4420", 00:12:47.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:47.469 "method": "bdev_nvme_attach_controller", 00:12:47.469 "req_id": 1 00:12:47.469 } 00:12:47.469 Got JSON-RPC error response 00:12:47.469 response: 00:12:47.469 { 00:12:47.469 "code": -32602, 00:12:47.469 "message": "Invalid parameters" 00:12:47.469 } 00:12:47.469 05:11:37 -- target/tls.sh@36 -- # killprocess 77228 00:12:47.469 05:11:37 -- common/autotest_common.sh@936 -- # '[' -z 77228 ']' 00:12:47.469 05:11:37 -- common/autotest_common.sh@940 -- # kill -0 77228 00:12:47.469 05:11:37 -- common/autotest_common.sh@941 -- # uname 00:12:47.469 05:11:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:47.469 05:11:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77228 00:12:47.469 killing process with pid 77228 00:12:47.469 Received shutdown signal, test time was about 10.000000 seconds 00:12:47.469 00:12:47.469 Latency(us) 00:12:47.469 [2024-12-08T05:11:37.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.469 [2024-12-08T05:11:37.255Z] =================================================================================================================== 00:12:47.469 [2024-12-08T05:11:37.255Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:47.469 05:11:37 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:47.469 05:11:37 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:47.469 05:11:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77228' 00:12:47.469 05:11:37 -- common/autotest_common.sh@955 -- # kill 77228 00:12:47.469 05:11:37 -- common/autotest_common.sh@960 -- # wait 77228 00:12:47.469 05:11:37 -- target/tls.sh@37 -- # return 1 00:12:47.469 05:11:37 -- common/autotest_common.sh@653 -- # es=1 00:12:47.469 05:11:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:47.469 05:11:37 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:47.469 05:11:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:47.469 05:11:37 -- target/tls.sh@167 -- # killprocess 76779 00:12:47.469 05:11:37 -- common/autotest_common.sh@936 -- # '[' -z 76779 ']' 00:12:47.469 05:11:37 -- common/autotest_common.sh@940 -- # kill -0 76779 00:12:47.469 05:11:37 -- common/autotest_common.sh@941 -- # uname 00:12:47.469 05:11:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:47.469 05:11:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76779 00:12:47.729 killing process with pid 76779 00:12:47.729 05:11:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:47.729 05:11:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:47.729 05:11:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76779' 00:12:47.729 05:11:37 -- common/autotest_common.sh@955 -- # kill 76779 00:12:47.729 05:11:37 -- common/autotest_common.sh@960 -- # wait 76779 00:12:47.729 05:11:37 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:12:47.729 05:11:37 -- target/tls.sh@49 -- # local key hash crc 00:12:47.729 05:11:37 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:12:47.729 05:11:37 -- target/tls.sh@51 -- # hash=02 
00:12:47.729 05:11:37 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:12:47.729 05:11:37 -- target/tls.sh@52 -- # gzip -1 -c 00:12:47.729 05:11:37 -- target/tls.sh@52 -- # tail -c8 00:12:47.729 05:11:37 -- target/tls.sh@52 -- # head -c 4 00:12:47.729 05:11:37 -- target/tls.sh@52 -- # crc='�e�'\''' 00:12:47.729 05:11:37 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:12:47.729 05:11:37 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:12:47.729 05:11:37 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:47.729 05:11:37 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:47.729 05:11:37 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:47.729 05:11:37 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:47.729 05:11:37 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:47.729 05:11:37 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:12:47.729 05:11:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:47.729 05:11:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:47.729 05:11:37 -- common/autotest_common.sh@10 -- # set +x 00:12:47.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.729 05:11:37 -- nvmf/common.sh@469 -- # nvmfpid=77270 00:12:47.729 05:11:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:47.729 05:11:37 -- nvmf/common.sh@470 -- # waitforlisten 77270 00:12:47.729 05:11:37 -- common/autotest_common.sh@829 -- # '[' -z 77270 ']' 00:12:47.729 05:11:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.729 05:11:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:47.729 05:11:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.729 05:11:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:47.729 05:11:37 -- common/autotest_common.sh@10 -- # set +x 00:12:47.729 [2024-12-08 05:11:37.478969] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:47.729 [2024-12-08 05:11:37.479076] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.987 [2024-12-08 05:11:37.610754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.987 [2024-12-08 05:11:37.645154] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:47.987 [2024-12-08 05:11:37.645526] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.987 [2024-12-08 05:11:37.645549] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.987 [2024-12-08 05:11:37.645559] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
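The key_long value derived just above is SPDK's TLS PSK interchange encoding of the 48-character configured key: the last eight bytes of a gzip -1 stream are CRC32 plus ISIZE, so the script borrows gzip as a cheap CRC32, appends the four CRC bytes to the key, base64-encodes the result, and wraps it as NVMeTLSkey-1:<hash>:<base64>: (the 02 hash designator corresponds to SHA-384). The same derivation, condensed from the trace (the CRC bytes are raw binary, which is why they render as unprintable characters in the log):

  key=00112233445566778899aabbccddeeff0011223344556677
  hash=02
  # last 8 bytes of the gzip stream are CRC32 + ISIZE; keep the first 4 (the CRC32 of the key)
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
  # append the CRC, base64-encode, and wrap in the interchange format
  echo "NVMeTLSkey-1:$hash:$(echo -n "$key$crc" | base64):"
  # -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: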
00:12:47.987 [2024-12-08 05:11:37.645590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.924 05:11:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:48.924 05:11:38 -- common/autotest_common.sh@862 -- # return 0 00:12:48.924 05:11:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:48.924 05:11:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:48.924 05:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:48.924 05:11:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.924 05:11:38 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:48.925 05:11:38 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:48.925 05:11:38 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:49.184 [2024-12-08 05:11:38.786017] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.184 05:11:38 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:49.450 05:11:39 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:49.707 [2024-12-08 05:11:39.406127] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:49.708 [2024-12-08 05:11:39.406370] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.708 05:11:39 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:49.965 malloc0 00:12:50.223 05:11:39 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:50.481 05:11:40 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:50.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
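Condensed from the setup_nvmf_tgt trace above, the target-side sequence that arms a TLS-capable listener and registers host1's key is the following (all calls go to the freshly started nvmf_tgt over its default /var/tmp/spdk.sock):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k marks the listener as TLS-enabled (logged above as experimental)
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt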
00:12:50.740 05:11:40 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:50.740 05:11:40 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:50.740 05:11:40 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:50.740 05:11:40 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:50.740 05:11:40 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:12:50.740 05:11:40 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:50.740 05:11:40 -- target/tls.sh@28 -- # bdevperf_pid=77329 00:12:50.740 05:11:40 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:50.740 05:11:40 -- target/tls.sh@31 -- # waitforlisten 77329 /var/tmp/bdevperf.sock 00:12:50.740 05:11:40 -- common/autotest_common.sh@829 -- # '[' -z 77329 ']' 00:12:50.740 05:11:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:50.740 05:11:40 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:50.740 05:11:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:50.740 05:11:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:50.740 05:11:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:50.740 05:11:40 -- common/autotest_common.sh@10 -- # set +x 00:12:50.740 [2024-12-08 05:11:40.369929] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:50.740 [2024-12-08 05:11:40.370238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77329 ] 00:12:50.740 [2024-12-08 05:11:40.503551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.051 [2024-12-08 05:11:40.547095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.051 05:11:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:51.051 05:11:40 -- common/autotest_common.sh@862 -- # return 0 00:12:51.051 05:11:40 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:51.308 [2024-12-08 05:11:40.908150] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:51.308 TLSTESTn1 00:12:51.308 05:11:40 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:51.566 Running I/O for 10 seconds... 
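The happy-path case here mirrors the negative ones but uses the registered key: bdevperf is started in wait mode (-z), the TLS controller is attached through its RPC socket, and the verify workload is kicked off with bdevperf.py (the -t 20 there acts as a timeout for the RPC, while the I/O duration comes from the -t 10 on the bdevperf command line). Pulled together from the trace:

  spdk=/home/vagrant/spdk_repo/spdk
  $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # (the script waits for the RPC socket with waitforlisten before issuing the next call)
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk $spdk/test/nvmf/target/key_long.txt
  $spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests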
00:13:01.587 00:13:01.587 Latency(us) 00:13:01.587 [2024-12-08T05:11:51.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.587 [2024-12-08T05:11:51.373Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:01.587 Verification LBA range: start 0x0 length 0x2000 00:13:01.587 TLSTESTn1 : 10.01 5271.67 20.59 0.00 0.00 24244.51 3798.11 27405.96 00:13:01.587 [2024-12-08T05:11:51.374Z] =================================================================================================================== 00:13:01.588 [2024-12-08T05:11:51.374Z] Total : 5271.67 20.59 0.00 0.00 24244.51 3798.11 27405.96 00:13:01.588 0 00:13:01.588 05:11:51 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:01.588 05:11:51 -- target/tls.sh@45 -- # killprocess 77329 00:13:01.588 05:11:51 -- common/autotest_common.sh@936 -- # '[' -z 77329 ']' 00:13:01.588 05:11:51 -- common/autotest_common.sh@940 -- # kill -0 77329 00:13:01.588 05:11:51 -- common/autotest_common.sh@941 -- # uname 00:13:01.588 05:11:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:01.588 05:11:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77329 00:13:01.588 killing process with pid 77329 00:13:01.588 Received shutdown signal, test time was about 10.000000 seconds 00:13:01.588 00:13:01.588 Latency(us) 00:13:01.588 [2024-12-08T05:11:51.374Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.588 [2024-12-08T05:11:51.374Z] =================================================================================================================== 00:13:01.588 [2024-12-08T05:11:51.374Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:01.588 05:11:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:01.588 05:11:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:01.588 05:11:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77329' 00:13:01.588 05:11:51 -- common/autotest_common.sh@955 -- # kill 77329 00:13:01.588 05:11:51 -- common/autotest_common.sh@960 -- # wait 77329 00:13:01.588 05:11:51 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:01.588 05:11:51 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:01.588 05:11:51 -- common/autotest_common.sh@650 -- # local es=0 00:13:01.588 05:11:51 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:01.588 05:11:51 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:01.588 05:11:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.588 05:11:51 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:01.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
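The chmod 0666 just above deliberately loosens the key file before the next case: bdev_nvme_attach_controller refuses to load a PSK file left open to group or other users, so the attach in the following trace fails inside tcp_load_psk with "Incorrect permissions for PSK file" and the RPC returns code -22 rather than a handshake error. Reproduced in isolation (same file and bdevperf socket as the test):

  chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
  # expected to fail: the RPC rejects PSK files with permissive modes
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt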
00:13:01.588 05:11:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.588 05:11:51 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:01.588 05:11:51 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:01.588 05:11:51 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:01.588 05:11:51 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:01.588 05:11:51 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:13:01.588 05:11:51 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:01.588 05:11:51 -- target/tls.sh@28 -- # bdevperf_pid=77452 00:13:01.588 05:11:51 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:01.588 05:11:51 -- target/tls.sh@31 -- # waitforlisten 77452 /var/tmp/bdevperf.sock 00:13:01.588 05:11:51 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:01.588 05:11:51 -- common/autotest_common.sh@829 -- # '[' -z 77452 ']' 00:13:01.588 05:11:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:01.588 05:11:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:01.588 05:11:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:01.588 05:11:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:01.588 05:11:51 -- common/autotest_common.sh@10 -- # set +x 00:13:01.846 [2024-12-08 05:11:51.390943] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:13:01.846 [2024-12-08 05:11:51.391264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77452 ] 00:13:01.846 [2024-12-08 05:11:51.527890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.846 [2024-12-08 05:11:51.564469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.104 05:11:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:02.104 05:11:51 -- common/autotest_common.sh@862 -- # return 0 00:13:02.104 05:11:51 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:02.363 [2024-12-08 05:11:51.931160] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:02.363 [2024-12-08 05:11:51.931462] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:02.363 request: 00:13:02.363 { 00:13:02.363 "name": "TLSTEST", 00:13:02.363 "trtype": "tcp", 00:13:02.363 "traddr": "10.0.0.2", 00:13:02.363 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:02.363 "adrfam": "ipv4", 00:13:02.363 "trsvcid": "4420", 00:13:02.363 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.363 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:13:02.363 "method": "bdev_nvme_attach_controller", 00:13:02.363 "req_id": 1 00:13:02.363 } 00:13:02.363 Got JSON-RPC error response 00:13:02.363 response: 00:13:02.363 { 00:13:02.363 "code": -22, 00:13:02.363 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:13:02.363 } 00:13:02.363 05:11:51 -- target/tls.sh@36 -- # killprocess 77452 00:13:02.363 05:11:51 -- common/autotest_common.sh@936 -- # '[' -z 77452 ']' 00:13:02.363 05:11:51 -- common/autotest_common.sh@940 -- # kill -0 77452 00:13:02.363 05:11:51 -- common/autotest_common.sh@941 -- # uname 00:13:02.363 05:11:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:02.363 05:11:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77452 00:13:02.363 killing process with pid 77452 00:13:02.363 Received shutdown signal, test time was about 10.000000 seconds 00:13:02.363 00:13:02.363 Latency(us) 00:13:02.363 [2024-12-08T05:11:52.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.363 [2024-12-08T05:11:52.149Z] =================================================================================================================== 00:13:02.363 [2024-12-08T05:11:52.149Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:02.363 05:11:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:02.363 05:11:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:02.363 05:11:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77452' 00:13:02.363 05:11:51 -- common/autotest_common.sh@955 -- # kill 77452 00:13:02.363 05:11:51 -- common/autotest_common.sh@960 -- # wait 77452 00:13:02.363 05:11:52 -- target/tls.sh@37 -- # return 1 00:13:02.363 05:11:52 -- common/autotest_common.sh@653 -- # es=1 00:13:02.363 05:11:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:02.363 05:11:52 -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:02.363 05:11:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:02.363 05:11:52 -- target/tls.sh@183 -- # killprocess 77270 00:13:02.364 05:11:52 -- common/autotest_common.sh@936 -- # '[' -z 77270 ']' 00:13:02.364 05:11:52 -- common/autotest_common.sh@940 -- # kill -0 77270 00:13:02.364 05:11:52 -- common/autotest_common.sh@941 -- # uname 00:13:02.364 05:11:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:02.364 05:11:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77270 00:13:02.623 killing process with pid 77270 00:13:02.623 05:11:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:02.623 05:11:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:02.623 05:11:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77270' 00:13:02.623 05:11:52 -- common/autotest_common.sh@955 -- # kill 77270 00:13:02.623 05:11:52 -- common/autotest_common.sh@960 -- # wait 77270 00:13:02.623 05:11:52 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:13:02.623 05:11:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:02.623 05:11:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:02.623 05:11:52 -- common/autotest_common.sh@10 -- # set +x 00:13:02.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.623 05:11:52 -- nvmf/common.sh@469 -- # nvmfpid=77477 00:13:02.623 05:11:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:02.623 05:11:52 -- nvmf/common.sh@470 -- # waitforlisten 77477 00:13:02.623 05:11:52 -- common/autotest_common.sh@829 -- # '[' -z 77477 ']' 00:13:02.623 05:11:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.623 05:11:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:02.623 05:11:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.623 05:11:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:02.623 05:11:52 -- common/autotest_common.sh@10 -- # set +x 00:13:02.623 [2024-12-08 05:11:52.383453] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:02.623 [2024-12-08 05:11:52.383806] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.881 [2024-12-08 05:11:52.524760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.881 [2024-12-08 05:11:52.559230] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:02.881 [2024-12-08 05:11:52.559553] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.881 [2024-12-08 05:11:52.559703] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.881 [2024-12-08 05:11:52.559836] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
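The same permission check exists on the target side: with key_long.txt still at 0666, the nvmf_subsystem_add_host call in the next stretch of the trace is rejected (tcp.c logs "Incorrect permissions for PSK file" and the RPC comes back as -32603 Internal error), while the transport, subsystem, listener, and namespace are all set up normally first. In isolation the failing step is just:

  # target is up and cnode1/malloc0 are configured as before; the PSK file is still mode 0666
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
  # -> JSON-RPC error -32603 "Internal error" (Could not retrieve PSK from file)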
00:13:02.881 [2024-12-08 05:11:52.560056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.881 05:11:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:02.881 05:11:52 -- common/autotest_common.sh@862 -- # return 0 00:13:02.881 05:11:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:02.881 05:11:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:02.881 05:11:52 -- common/autotest_common.sh@10 -- # set +x 00:13:03.157 05:11:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.157 05:11:52 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:03.157 05:11:52 -- common/autotest_common.sh@650 -- # local es=0 00:13:03.157 05:11:52 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:03.157 05:11:52 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:13:03.157 05:11:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:03.157 05:11:52 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:13:03.157 05:11:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:03.157 05:11:52 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:03.157 05:11:52 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:03.157 05:11:52 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:03.416 [2024-12-08 05:11:52.945442] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.416 05:11:52 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:03.675 05:11:53 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:03.935 [2024-12-08 05:11:53.549609] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:03.935 [2024-12-08 05:11:53.549865] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.935 05:11:53 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:04.193 malloc0 00:13:04.193 05:11:53 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:04.452 05:11:54 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:04.711 [2024-12-08 05:11:54.493105] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:04.711 [2024-12-08 05:11:54.493159] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:13:04.711 [2024-12-08 05:11:54.493194] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:13:04.970 request: 00:13:04.971 { 00:13:04.971 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:04.971 "host": "nqn.2016-06.io.spdk:host1", 00:13:04.971 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:13:04.971 "method": "nvmf_subsystem_add_host", 00:13:04.971 
"req_id": 1 00:13:04.971 } 00:13:04.971 Got JSON-RPC error response 00:13:04.971 response: 00:13:04.971 { 00:13:04.971 "code": -32603, 00:13:04.971 "message": "Internal error" 00:13:04.971 } 00:13:04.971 05:11:54 -- common/autotest_common.sh@653 -- # es=1 00:13:04.971 05:11:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:04.971 05:11:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:04.971 05:11:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:04.971 05:11:54 -- target/tls.sh@189 -- # killprocess 77477 00:13:04.971 05:11:54 -- common/autotest_common.sh@936 -- # '[' -z 77477 ']' 00:13:04.971 05:11:54 -- common/autotest_common.sh@940 -- # kill -0 77477 00:13:04.971 05:11:54 -- common/autotest_common.sh@941 -- # uname 00:13:04.971 05:11:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:04.971 05:11:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77477 00:13:04.971 killing process with pid 77477 00:13:04.971 05:11:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:04.971 05:11:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:04.971 05:11:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77477' 00:13:04.971 05:11:54 -- common/autotest_common.sh@955 -- # kill 77477 00:13:04.971 05:11:54 -- common/autotest_common.sh@960 -- # wait 77477 00:13:04.971 05:11:54 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:04.971 05:11:54 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:13:04.971 05:11:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:04.971 05:11:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:04.971 05:11:54 -- common/autotest_common.sh@10 -- # set +x 00:13:04.971 05:11:54 -- nvmf/common.sh@469 -- # nvmfpid=77532 00:13:04.971 05:11:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:04.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.971 05:11:54 -- nvmf/common.sh@470 -- # waitforlisten 77532 00:13:04.971 05:11:54 -- common/autotest_common.sh@829 -- # '[' -z 77532 ']' 00:13:04.971 05:11:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.971 05:11:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:04.971 05:11:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.971 05:11:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:04.971 05:11:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.239 [2024-12-08 05:11:54.785159] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:05.239 [2024-12-08 05:11:54.785575] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.239 [2024-12-08 05:11:54.925630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.239 [2024-12-08 05:11:54.965156] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:05.239 [2024-12-08 05:11:54.965531] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:05.239 [2024-12-08 05:11:54.965558] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:05.239 [2024-12-08 05:11:54.965570] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:05.239 [2024-12-08 05:11:54.965602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.169 05:11:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:06.169 05:11:55 -- common/autotest_common.sh@862 -- # return 0 00:13:06.169 05:11:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:06.169 05:11:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:06.169 05:11:55 -- common/autotest_common.sh@10 -- # set +x 00:13:06.169 05:11:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.169 05:11:55 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:06.169 05:11:55 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:06.169 05:11:55 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:06.427 [2024-12-08 05:11:56.120834] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:06.427 05:11:56 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:06.700 05:11:56 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:06.959 [2024-12-08 05:11:56.717065] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:06.959 [2024-12-08 05:11:56.717564] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.959 05:11:56 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:07.217 malloc0 00:13:07.217 05:11:56 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:07.783 05:11:57 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:08.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:08.042 05:11:57 -- target/tls.sh@197 -- # bdevperf_pid=77592 00:13:08.042 05:11:57 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:08.042 05:11:57 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:08.042 05:11:57 -- target/tls.sh@200 -- # waitforlisten 77592 /var/tmp/bdevperf.sock 00:13:08.042 05:11:57 -- common/autotest_common.sh@829 -- # '[' -z 77592 ']' 00:13:08.042 05:11:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:08.042 05:11:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:08.042 05:11:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
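Once bdevperf is listening, the test attaches TLSTESTn1 with the good key and then snapshots the running configuration of both applications with save_config; those JSON dumps (tgtconf and bdevperfconf in the script) are what fill the remainder of this excerpt. Captured by hand, the same snapshots look like this (file names are illustrative; load_config, which reads JSON on stdin, can replay a dump into a freshly started app):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config > tgt_config.json                                # target side (tgtconf)
  $rpc -s /var/tmp/bdevperf.sock save_config > bdevperf_config.json # initiator side (bdevperfconf)
  $rpc load_config < tgt_config.json                                # replay into a new target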
00:13:08.042 05:11:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:08.042 05:11:57 -- common/autotest_common.sh@10 -- # set +x 00:13:08.042 [2024-12-08 05:11:57.645752] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:08.042 [2024-12-08 05:11:57.646087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77592 ] 00:13:08.042 [2024-12-08 05:11:57.787638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.299 [2024-12-08 05:11:57.831755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.299 05:11:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:08.299 05:11:57 -- common/autotest_common.sh@862 -- # return 0 00:13:08.299 05:11:57 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:08.556 [2024-12-08 05:11:58.211651] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:08.556 TLSTESTn1 00:13:08.556 05:11:58 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:09.123 05:11:58 -- target/tls.sh@205 -- # tgtconf='{ 00:13:09.123 "subsystems": [ 00:13:09.123 { 00:13:09.123 "subsystem": "iobuf", 00:13:09.123 "config": [ 00:13:09.123 { 00:13:09.123 "method": "iobuf_set_options", 00:13:09.123 "params": { 00:13:09.123 "small_pool_count": 8192, 00:13:09.123 "large_pool_count": 1024, 00:13:09.123 "small_bufsize": 8192, 00:13:09.123 "large_bufsize": 135168 00:13:09.123 } 00:13:09.123 } 00:13:09.123 ] 00:13:09.123 }, 00:13:09.123 { 00:13:09.123 "subsystem": "sock", 00:13:09.123 "config": [ 00:13:09.123 { 00:13:09.123 "method": "sock_impl_set_options", 00:13:09.123 "params": { 00:13:09.123 "impl_name": "uring", 00:13:09.123 "recv_buf_size": 2097152, 00:13:09.123 "send_buf_size": 2097152, 00:13:09.123 "enable_recv_pipe": true, 00:13:09.123 "enable_quickack": false, 00:13:09.123 "enable_placement_id": 0, 00:13:09.123 "enable_zerocopy_send_server": false, 00:13:09.123 "enable_zerocopy_send_client": false, 00:13:09.123 "zerocopy_threshold": 0, 00:13:09.123 "tls_version": 0, 00:13:09.123 "enable_ktls": false 00:13:09.123 } 00:13:09.123 }, 00:13:09.123 { 00:13:09.123 "method": "sock_impl_set_options", 00:13:09.123 "params": { 00:13:09.123 "impl_name": "posix", 00:13:09.123 "recv_buf_size": 2097152, 00:13:09.123 "send_buf_size": 2097152, 00:13:09.123 "enable_recv_pipe": true, 00:13:09.123 "enable_quickack": false, 00:13:09.123 "enable_placement_id": 0, 00:13:09.123 "enable_zerocopy_send_server": true, 00:13:09.123 "enable_zerocopy_send_client": false, 00:13:09.123 "zerocopy_threshold": 0, 00:13:09.123 "tls_version": 0, 00:13:09.123 "enable_ktls": false 00:13:09.123 } 00:13:09.123 }, 00:13:09.123 { 00:13:09.123 "method": "sock_impl_set_options", 00:13:09.123 "params": { 00:13:09.123 "impl_name": "ssl", 00:13:09.123 "recv_buf_size": 4096, 00:13:09.123 "send_buf_size": 4096, 00:13:09.123 "enable_recv_pipe": true, 00:13:09.123 "enable_quickack": false, 00:13:09.123 "enable_placement_id": 0, 00:13:09.123 "enable_zerocopy_send_server": true, 00:13:09.123 "enable_zerocopy_send_client": false, 00:13:09.123 
"zerocopy_threshold": 0, 00:13:09.123 "tls_version": 0, 00:13:09.123 "enable_ktls": false 00:13:09.123 } 00:13:09.123 } 00:13:09.123 ] 00:13:09.123 }, 00:13:09.123 { 00:13:09.123 "subsystem": "vmd", 00:13:09.123 "config": [] 00:13:09.123 }, 00:13:09.123 { 00:13:09.123 "subsystem": "accel", 00:13:09.123 "config": [ 00:13:09.123 { 00:13:09.123 "method": "accel_set_options", 00:13:09.123 "params": { 00:13:09.123 "small_cache_size": 128, 00:13:09.123 "large_cache_size": 16, 00:13:09.123 "task_count": 2048, 00:13:09.123 "sequence_count": 2048, 00:13:09.123 "buf_count": 2048 00:13:09.123 } 00:13:09.123 } 00:13:09.123 ] 00:13:09.123 }, 00:13:09.123 { 00:13:09.123 "subsystem": "bdev", 00:13:09.123 "config": [ 00:13:09.123 { 00:13:09.123 "method": "bdev_set_options", 00:13:09.123 "params": { 00:13:09.123 "bdev_io_pool_size": 65535, 00:13:09.123 "bdev_io_cache_size": 256, 00:13:09.123 "bdev_auto_examine": true, 00:13:09.123 "iobuf_small_cache_size": 128, 00:13:09.124 "iobuf_large_cache_size": 16 00:13:09.124 } 00:13:09.124 }, 00:13:09.124 { 00:13:09.124 "method": "bdev_raid_set_options", 00:13:09.124 "params": { 00:13:09.124 "process_window_size_kb": 1024 00:13:09.124 } 00:13:09.124 }, 00:13:09.124 { 00:13:09.124 "method": "bdev_iscsi_set_options", 00:13:09.124 "params": { 00:13:09.124 "timeout_sec": 30 00:13:09.124 } 00:13:09.124 }, 00:13:09.124 { 00:13:09.124 "method": "bdev_nvme_set_options", 00:13:09.124 "params": { 00:13:09.124 "action_on_timeout": "none", 00:13:09.124 "timeout_us": 0, 00:13:09.124 "timeout_admin_us": 0, 00:13:09.124 "keep_alive_timeout_ms": 10000, 00:13:09.124 "transport_retry_count": 4, 00:13:09.124 "arbitration_burst": 0, 00:13:09.124 "low_priority_weight": 0, 00:13:09.124 "medium_priority_weight": 0, 00:13:09.124 "high_priority_weight": 0, 00:13:09.124 "nvme_adminq_poll_period_us": 10000, 00:13:09.124 "nvme_ioq_poll_period_us": 0, 00:13:09.124 "io_queue_requests": 0, 00:13:09.124 "delay_cmd_submit": true, 00:13:09.124 "bdev_retry_count": 3, 00:13:09.124 "transport_ack_timeout": 0, 00:13:09.124 "ctrlr_loss_timeout_sec": 0, 00:13:09.124 "reconnect_delay_sec": 0, 00:13:09.124 "fast_io_fail_timeout_sec": 0, 00:13:09.124 "generate_uuids": false, 00:13:09.124 "transport_tos": 0, 00:13:09.124 "io_path_stat": false, 00:13:09.124 "allow_accel_sequence": false 00:13:09.124 } 00:13:09.124 }, 00:13:09.124 { 00:13:09.124 "method": "bdev_nvme_set_hotplug", 00:13:09.124 "params": { 00:13:09.124 "period_us": 100000, 00:13:09.124 "enable": false 00:13:09.124 } 00:13:09.124 }, 00:13:09.124 { 00:13:09.124 "method": "bdev_malloc_create", 00:13:09.124 "params": { 00:13:09.124 "name": "malloc0", 00:13:09.124 "num_blocks": 8192, 00:13:09.124 "block_size": 4096, 00:13:09.124 "physical_block_size": 4096, 00:13:09.124 "uuid": "972b9260-3889-4858-bc00-e3786f632f14", 00:13:09.124 "optimal_io_boundary": 0 00:13:09.124 } 00:13:09.124 }, 00:13:09.124 { 00:13:09.124 "method": "bdev_wait_for_examine" 00:13:09.124 } 00:13:09.124 ] 00:13:09.124 }, 00:13:09.124 { 00:13:09.124 "subsystem": "nbd", 00:13:09.124 "config": [] 00:13:09.124 }, 00:13:09.124 { 00:13:09.124 "subsystem": "scheduler", 00:13:09.124 "config": [ 00:13:09.124 { 00:13:09.124 "method": "framework_set_scheduler", 00:13:09.124 "params": { 00:13:09.124 "name": "static" 00:13:09.124 } 00:13:09.124 } 00:13:09.124 ] 00:13:09.124 }, 00:13:09.124 { 00:13:09.124 "subsystem": "nvmf", 00:13:09.124 "config": [ 00:13:09.124 { 00:13:09.124 "method": "nvmf_set_config", 00:13:09.124 "params": { 00:13:09.124 "discovery_filter": "match_any", 00:13:09.124 
"admin_cmd_passthru": { 00:13:09.124 "identify_ctrlr": false 00:13:09.124 } 00:13:09.124 } 00:13:09.124 }, 00:13:09.124 { 00:13:09.124 "method": "nvmf_set_max_subsystems", 00:13:09.124 "params": { 00:13:09.124 "max_subsystems": 1024 00:13:09.124 } 00:13:09.124 }, 00:13:09.124 { 00:13:09.124 "method": "nvmf_set_crdt", 00:13:09.124 "params": { 00:13:09.124 "crdt1": 0, 00:13:09.124 "crdt2": 0, 00:13:09.124 "crdt3": 0 00:13:09.124 } 00:13:09.124 }, 00:13:09.124 { 00:13:09.124 "method": "nvmf_create_transport", 00:13:09.124 "params": { 00:13:09.124 "trtype": "TCP", 00:13:09.124 "max_queue_depth": 128, 00:13:09.124 "max_io_qpairs_per_ctrlr": 127, 00:13:09.124 "in_capsule_data_size": 4096, 00:13:09.124 "max_io_size": 131072, 00:13:09.124 "io_unit_size": 131072, 00:13:09.124 "max_aq_depth": 128, 00:13:09.124 "num_shared_buffers": 511, 00:13:09.124 "buf_cache_size": 4294967295, 00:13:09.124 "dif_insert_or_strip": false, 00:13:09.124 "zcopy": false, 00:13:09.124 "c2h_success": false, 00:13:09.124 "sock_priority": 0, 00:13:09.124 "abort_timeout_sec": 1 00:13:09.124 } 00:13:09.124 }, 00:13:09.124 { 00:13:09.124 "method": "nvmf_create_subsystem", 00:13:09.124 "params": { 00:13:09.124 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:09.124 "allow_any_host": false, 00:13:09.124 "serial_number": "SPDK00000000000001", 00:13:09.124 "model_number": "SPDK bdev Controller", 00:13:09.124 "max_namespaces": 10, 00:13:09.124 "min_cntlid": 1, 00:13:09.124 "max_cntlid": 65519, 00:13:09.124 "ana_reporting": false 00:13:09.124 } 00:13:09.124 }, 00:13:09.124 { 00:13:09.124 "method": "nvmf_subsystem_add_host", 00:13:09.124 "params": { 00:13:09.124 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:09.124 "host": "nqn.2016-06.io.spdk:host1", 00:13:09.124 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:13:09.124 } 00:13:09.124 }, 00:13:09.124 { 00:13:09.124 "method": "nvmf_subsystem_add_ns", 00:13:09.124 "params": { 00:13:09.124 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:09.124 "namespace": { 00:13:09.124 "nsid": 1, 00:13:09.124 "bdev_name": "malloc0", 00:13:09.124 "nguid": "972B926038894858BC00E3786F632F14", 00:13:09.124 "uuid": "972b9260-3889-4858-bc00-e3786f632f14" 00:13:09.124 } 00:13:09.124 } 00:13:09.124 }, 00:13:09.124 { 00:13:09.124 "method": "nvmf_subsystem_add_listener", 00:13:09.124 "params": { 00:13:09.124 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:09.124 "listen_address": { 00:13:09.124 "trtype": "TCP", 00:13:09.124 "adrfam": "IPv4", 00:13:09.124 "traddr": "10.0.0.2", 00:13:09.124 "trsvcid": "4420" 00:13:09.124 }, 00:13:09.124 "secure_channel": true 00:13:09.124 } 00:13:09.124 } 00:13:09.124 ] 00:13:09.124 } 00:13:09.124 ] 00:13:09.124 }' 00:13:09.124 05:11:58 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:09.383 05:11:58 -- target/tls.sh@206 -- # bdevperfconf='{ 00:13:09.383 "subsystems": [ 00:13:09.383 { 00:13:09.383 "subsystem": "iobuf", 00:13:09.383 "config": [ 00:13:09.383 { 00:13:09.383 "method": "iobuf_set_options", 00:13:09.383 "params": { 00:13:09.383 "small_pool_count": 8192, 00:13:09.383 "large_pool_count": 1024, 00:13:09.383 "small_bufsize": 8192, 00:13:09.383 "large_bufsize": 135168 00:13:09.383 } 00:13:09.383 } 00:13:09.383 ] 00:13:09.383 }, 00:13:09.383 { 00:13:09.383 "subsystem": "sock", 00:13:09.383 "config": [ 00:13:09.383 { 00:13:09.383 "method": "sock_impl_set_options", 00:13:09.383 "params": { 00:13:09.383 "impl_name": "uring", 00:13:09.383 "recv_buf_size": 2097152, 00:13:09.383 "send_buf_size": 2097152, 
00:13:09.383 "enable_recv_pipe": true, 00:13:09.383 "enable_quickack": false, 00:13:09.383 "enable_placement_id": 0, 00:13:09.383 "enable_zerocopy_send_server": false, 00:13:09.383 "enable_zerocopy_send_client": false, 00:13:09.383 "zerocopy_threshold": 0, 00:13:09.383 "tls_version": 0, 00:13:09.383 "enable_ktls": false 00:13:09.383 } 00:13:09.383 }, 00:13:09.383 { 00:13:09.383 "method": "sock_impl_set_options", 00:13:09.383 "params": { 00:13:09.383 "impl_name": "posix", 00:13:09.383 "recv_buf_size": 2097152, 00:13:09.383 "send_buf_size": 2097152, 00:13:09.383 "enable_recv_pipe": true, 00:13:09.383 "enable_quickack": false, 00:13:09.383 "enable_placement_id": 0, 00:13:09.383 "enable_zerocopy_send_server": true, 00:13:09.383 "enable_zerocopy_send_client": false, 00:13:09.383 "zerocopy_threshold": 0, 00:13:09.383 "tls_version": 0, 00:13:09.383 "enable_ktls": false 00:13:09.383 } 00:13:09.383 }, 00:13:09.383 { 00:13:09.383 "method": "sock_impl_set_options", 00:13:09.383 "params": { 00:13:09.383 "impl_name": "ssl", 00:13:09.383 "recv_buf_size": 4096, 00:13:09.383 "send_buf_size": 4096, 00:13:09.383 "enable_recv_pipe": true, 00:13:09.383 "enable_quickack": false, 00:13:09.383 "enable_placement_id": 0, 00:13:09.383 "enable_zerocopy_send_server": true, 00:13:09.383 "enable_zerocopy_send_client": false, 00:13:09.383 "zerocopy_threshold": 0, 00:13:09.383 "tls_version": 0, 00:13:09.383 "enable_ktls": false 00:13:09.383 } 00:13:09.383 } 00:13:09.383 ] 00:13:09.383 }, 00:13:09.383 { 00:13:09.383 "subsystem": "vmd", 00:13:09.383 "config": [] 00:13:09.383 }, 00:13:09.383 { 00:13:09.383 "subsystem": "accel", 00:13:09.383 "config": [ 00:13:09.383 { 00:13:09.383 "method": "accel_set_options", 00:13:09.384 "params": { 00:13:09.384 "small_cache_size": 128, 00:13:09.384 "large_cache_size": 16, 00:13:09.384 "task_count": 2048, 00:13:09.384 "sequence_count": 2048, 00:13:09.384 "buf_count": 2048 00:13:09.384 } 00:13:09.384 } 00:13:09.384 ] 00:13:09.384 }, 00:13:09.384 { 00:13:09.384 "subsystem": "bdev", 00:13:09.384 "config": [ 00:13:09.384 { 00:13:09.384 "method": "bdev_set_options", 00:13:09.384 "params": { 00:13:09.384 "bdev_io_pool_size": 65535, 00:13:09.384 "bdev_io_cache_size": 256, 00:13:09.384 "bdev_auto_examine": true, 00:13:09.384 "iobuf_small_cache_size": 128, 00:13:09.384 "iobuf_large_cache_size": 16 00:13:09.384 } 00:13:09.384 }, 00:13:09.384 { 00:13:09.384 "method": "bdev_raid_set_options", 00:13:09.384 "params": { 00:13:09.384 "process_window_size_kb": 1024 00:13:09.384 } 00:13:09.384 }, 00:13:09.384 { 00:13:09.384 "method": "bdev_iscsi_set_options", 00:13:09.384 "params": { 00:13:09.384 "timeout_sec": 30 00:13:09.384 } 00:13:09.384 }, 00:13:09.384 { 00:13:09.384 "method": "bdev_nvme_set_options", 00:13:09.384 "params": { 00:13:09.384 "action_on_timeout": "none", 00:13:09.384 "timeout_us": 0, 00:13:09.384 "timeout_admin_us": 0, 00:13:09.384 "keep_alive_timeout_ms": 10000, 00:13:09.384 "transport_retry_count": 4, 00:13:09.384 "arbitration_burst": 0, 00:13:09.384 "low_priority_weight": 0, 00:13:09.384 "medium_priority_weight": 0, 00:13:09.384 "high_priority_weight": 0, 00:13:09.384 "nvme_adminq_poll_period_us": 10000, 00:13:09.384 "nvme_ioq_poll_period_us": 0, 00:13:09.384 "io_queue_requests": 512, 00:13:09.384 "delay_cmd_submit": true, 00:13:09.384 "bdev_retry_count": 3, 00:13:09.384 "transport_ack_timeout": 0, 00:13:09.384 "ctrlr_loss_timeout_sec": 0, 00:13:09.384 "reconnect_delay_sec": 0, 00:13:09.384 "fast_io_fail_timeout_sec": 0, 00:13:09.384 "generate_uuids": false, 00:13:09.384 
"transport_tos": 0, 00:13:09.384 "io_path_stat": false, 00:13:09.384 "allow_accel_sequence": false 00:13:09.384 } 00:13:09.384 }, 00:13:09.384 { 00:13:09.384 "method": "bdev_nvme_attach_controller", 00:13:09.384 "params": { 00:13:09.384 "name": "TLSTEST", 00:13:09.384 "trtype": "TCP", 00:13:09.384 "adrfam": "IPv4", 00:13:09.384 "traddr": "10.0.0.2", 00:13:09.384 "trsvcid": "4420", 00:13:09.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:09.384 "prchk_reftag": false, 00:13:09.384 "prchk_guard": false, 00:13:09.384 "ctrlr_loss_timeout_sec": 0, 00:13:09.384 "reconnect_delay_sec": 0, 00:13:09.384 "fast_io_fail_timeout_sec": 0, 00:13:09.384 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:13:09.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:09.384 "hdgst": false, 00:13:09.384 "ddgst": false 00:13:09.384 } 00:13:09.384 }, 00:13:09.384 { 00:13:09.384 "method": "bdev_nvme_set_hotplug", 00:13:09.384 "params": { 00:13:09.384 "period_us": 100000, 00:13:09.384 "enable": false 00:13:09.384 } 00:13:09.384 }, 00:13:09.384 { 00:13:09.384 "method": "bdev_wait_for_examine" 00:13:09.384 } 00:13:09.384 ] 00:13:09.384 }, 00:13:09.384 { 00:13:09.384 "subsystem": "nbd", 00:13:09.384 "config": [] 00:13:09.384 } 00:13:09.384 ] 00:13:09.384 }' 00:13:09.384 05:11:58 -- target/tls.sh@208 -- # killprocess 77592 00:13:09.384 05:11:58 -- common/autotest_common.sh@936 -- # '[' -z 77592 ']' 00:13:09.384 05:11:58 -- common/autotest_common.sh@940 -- # kill -0 77592 00:13:09.384 05:11:58 -- common/autotest_common.sh@941 -- # uname 00:13:09.384 05:11:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:09.384 05:11:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77592 00:13:09.384 05:11:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:09.384 05:11:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:09.384 killing process with pid 77592 00:13:09.384 05:11:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77592' 00:13:09.384 05:11:59 -- common/autotest_common.sh@955 -- # kill 77592 00:13:09.384 Received shutdown signal, test time was about 10.000000 seconds 00:13:09.384 00:13:09.384 Latency(us) 00:13:09.384 [2024-12-08T05:11:59.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.384 [2024-12-08T05:11:59.170Z] =================================================================================================================== 00:13:09.384 [2024-12-08T05:11:59.170Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:09.384 05:11:59 -- common/autotest_common.sh@960 -- # wait 77592 00:13:09.643 05:11:59 -- target/tls.sh@209 -- # killprocess 77532 00:13:09.643 05:11:59 -- common/autotest_common.sh@936 -- # '[' -z 77532 ']' 00:13:09.643 05:11:59 -- common/autotest_common.sh@940 -- # kill -0 77532 00:13:09.643 05:11:59 -- common/autotest_common.sh@941 -- # uname 00:13:09.643 05:11:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:09.643 05:11:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77532 00:13:09.643 killing process with pid 77532 00:13:09.643 05:11:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:09.643 05:11:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:09.643 05:11:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77532' 00:13:09.643 05:11:59 -- common/autotest_common.sh@955 -- # kill 77532 00:13:09.643 05:11:59 -- common/autotest_common.sh@960 -- # 
wait 77532 00:13:09.643 05:11:59 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:09.643 05:11:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:09.643 05:11:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:09.643 05:11:59 -- common/autotest_common.sh@10 -- # set +x 00:13:09.643 05:11:59 -- target/tls.sh@212 -- # echo '{ 00:13:09.643 "subsystems": [ 00:13:09.643 { 00:13:09.643 "subsystem": "iobuf", 00:13:09.643 "config": [ 00:13:09.643 { 00:13:09.643 "method": "iobuf_set_options", 00:13:09.643 "params": { 00:13:09.643 "small_pool_count": 8192, 00:13:09.643 "large_pool_count": 1024, 00:13:09.643 "small_bufsize": 8192, 00:13:09.643 "large_bufsize": 135168 00:13:09.643 } 00:13:09.643 } 00:13:09.643 ] 00:13:09.643 }, 00:13:09.643 { 00:13:09.643 "subsystem": "sock", 00:13:09.643 "config": [ 00:13:09.643 { 00:13:09.643 "method": "sock_impl_set_options", 00:13:09.643 "params": { 00:13:09.643 "impl_name": "uring", 00:13:09.643 "recv_buf_size": 2097152, 00:13:09.643 "send_buf_size": 2097152, 00:13:09.643 "enable_recv_pipe": true, 00:13:09.643 "enable_quickack": false, 00:13:09.643 "enable_placement_id": 0, 00:13:09.643 "enable_zerocopy_send_server": false, 00:13:09.643 "enable_zerocopy_send_client": false, 00:13:09.643 "zerocopy_threshold": 0, 00:13:09.643 "tls_version": 0, 00:13:09.643 "enable_ktls": false 00:13:09.643 } 00:13:09.643 }, 00:13:09.643 { 00:13:09.643 "method": "sock_impl_set_options", 00:13:09.643 "params": { 00:13:09.643 "impl_name": "posix", 00:13:09.643 "recv_buf_size": 2097152, 00:13:09.643 "send_buf_size": 2097152, 00:13:09.643 "enable_recv_pipe": true, 00:13:09.643 "enable_quickack": false, 00:13:09.643 "enable_placement_id": 0, 00:13:09.643 "enable_zerocopy_send_server": true, 00:13:09.643 "enable_zerocopy_send_client": false, 00:13:09.643 "zerocopy_threshold": 0, 00:13:09.643 "tls_version": 0, 00:13:09.643 "enable_ktls": false 00:13:09.643 } 00:13:09.643 }, 00:13:09.643 { 00:13:09.643 "method": "sock_impl_set_options", 00:13:09.643 "params": { 00:13:09.643 "impl_name": "ssl", 00:13:09.643 "recv_buf_size": 4096, 00:13:09.643 "send_buf_size": 4096, 00:13:09.643 "enable_recv_pipe": true, 00:13:09.643 "enable_quickack": false, 00:13:09.643 "enable_placement_id": 0, 00:13:09.643 "enable_zerocopy_send_server": true, 00:13:09.643 "enable_zerocopy_send_client": false, 00:13:09.643 "zerocopy_threshold": 0, 00:13:09.643 "tls_version": 0, 00:13:09.643 "enable_ktls": false 00:13:09.643 } 00:13:09.643 } 00:13:09.643 ] 00:13:09.643 }, 00:13:09.643 { 00:13:09.643 "subsystem": "vmd", 00:13:09.643 "config": [] 00:13:09.643 }, 00:13:09.643 { 00:13:09.643 "subsystem": "accel", 00:13:09.643 "config": [ 00:13:09.643 { 00:13:09.643 "method": "accel_set_options", 00:13:09.643 "params": { 00:13:09.643 "small_cache_size": 128, 00:13:09.643 "large_cache_size": 16, 00:13:09.643 "task_count": 2048, 00:13:09.643 "sequence_count": 2048, 00:13:09.643 "buf_count": 2048 00:13:09.643 } 00:13:09.643 } 00:13:09.643 ] 00:13:09.643 }, 00:13:09.643 { 00:13:09.643 "subsystem": "bdev", 00:13:09.643 "config": [ 00:13:09.643 { 00:13:09.643 "method": "bdev_set_options", 00:13:09.643 "params": { 00:13:09.643 "bdev_io_pool_size": 65535, 00:13:09.643 "bdev_io_cache_size": 256, 00:13:09.644 "bdev_auto_examine": true, 00:13:09.644 "iobuf_small_cache_size": 128, 00:13:09.644 "iobuf_large_cache_size": 16 00:13:09.644 } 00:13:09.644 }, 00:13:09.644 { 00:13:09.644 "method": "bdev_raid_set_options", 00:13:09.644 "params": { 00:13:09.644 "process_window_size_kb": 1024 00:13:09.644 } 
00:13:09.644 }, 00:13:09.644 { 00:13:09.644 "method": "bdev_iscsi_set_options", 00:13:09.644 "params": { 00:13:09.644 "timeout_sec": 30 00:13:09.644 } 00:13:09.644 }, 00:13:09.644 { 00:13:09.644 "method": "bdev_nvme_set_options", 00:13:09.644 "params": { 00:13:09.644 "action_on_timeout": "none", 00:13:09.644 "timeout_us": 0, 00:13:09.644 "timeout_admin_us": 0, 00:13:09.644 "keep_alive_timeout_ms": 10000, 00:13:09.644 "transport_retry_count": 4, 00:13:09.644 "arbitration_burst": 0, 00:13:09.644 "low_priority_weight": 0, 00:13:09.644 "medium_priority_weight": 0, 00:13:09.644 "high_priority_weight": 0, 00:13:09.644 "nvme_adminq_poll_period_us": 10000, 00:13:09.644 "nvme_ioq_poll_period_us": 0, 00:13:09.644 "io_queue_requests": 0, 00:13:09.644 "delay_cmd_submit": true, 00:13:09.644 "bdev_retry_count": 3, 00:13:09.644 "transport_ack_timeout": 0, 00:13:09.644 "ctrlr_loss_timeout_sec": 0, 00:13:09.644 "reconnect_delay_sec": 0, 00:13:09.644 "fast_io_fail_timeout_sec": 0, 00:13:09.644 "generate_uuids": false, 00:13:09.644 "transport_tos": 0, 00:13:09.644 "io_path_stat": false, 00:13:09.644 "allow_accel_sequence": false 00:13:09.644 } 00:13:09.644 }, 00:13:09.644 { 00:13:09.644 "method": "bdev_nvme_set_hotplug", 00:13:09.644 "params": { 00:13:09.644 "period_us": 100000, 00:13:09.644 "enable": false 00:13:09.644 } 00:13:09.644 }, 00:13:09.644 { 00:13:09.644 "method": "bdev_malloc_create", 00:13:09.644 "params": { 00:13:09.644 "name": "malloc0", 00:13:09.644 "num_blocks": 8192, 00:13:09.644 "block_size": 4096, 00:13:09.644 "physical_block_size": 4096, 00:13:09.644 "uuid": "972b9260-3889-4858-bc00-e3786f632f14", 00:13:09.644 "optimal_io_boundary": 0 00:13:09.644 } 00:13:09.644 }, 00:13:09.644 { 00:13:09.644 "method": "bdev_wait_for_examine" 00:13:09.644 } 00:13:09.644 ] 00:13:09.644 }, 00:13:09.644 { 00:13:09.644 "subsystem": "nbd", 00:13:09.644 "config": [] 00:13:09.644 }, 00:13:09.644 { 00:13:09.644 "subsystem": "scheduler", 00:13:09.644 "config": [ 00:13:09.644 { 00:13:09.644 "method": "framework_set_scheduler", 00:13:09.644 "params": { 00:13:09.644 "name": "static" 00:13:09.644 } 00:13:09.644 } 00:13:09.644 ] 00:13:09.644 }, 00:13:09.644 { 00:13:09.644 "subsystem": "nvmf", 00:13:09.644 "config": [ 00:13:09.644 { 00:13:09.644 "method": "nvmf_set_config", 00:13:09.644 "params": { 00:13:09.644 "discovery_filter": "match_any", 00:13:09.644 "admin_cmd_passthru": { 00:13:09.644 "identify_ctrlr": false 00:13:09.644 } 00:13:09.644 } 00:13:09.644 }, 00:13:09.644 { 00:13:09.644 "method": "nvmf_set_max_subsystems", 00:13:09.644 "params": { 00:13:09.644 "max_subsystems": 1024 00:13:09.644 } 00:13:09.644 }, 00:13:09.644 { 00:13:09.644 "method": "nvmf_set_crdt", 00:13:09.644 "params": { 00:13:09.644 "crdt1": 0, 00:13:09.644 "crdt2": 0, 00:13:09.644 "crdt3": 0 00:13:09.644 } 00:13:09.644 }, 00:13:09.644 { 00:13:09.644 "method": "nvmf_create_transport", 00:13:09.644 "params": { 00:13:09.644 "trtype": "TCP", 00:13:09.644 "max_queue_depth": 128, 00:13:09.644 "max_io_qpairs_per_ctrlr": 127, 00:13:09.644 "in_capsule_data_size": 4096, 00:13:09.644 "max_io_size": 131072, 00:13:09.644 "io_unit_size": 131072, 00:13:09.644 "max_aq_depth": 128, 00:13:09.644 "num_shared_buffers": 511, 00:13:09.644 "buf_cache_size": 4294967295, 00:13:09.644 "dif_insert_or_strip": false, 00:13:09.644 "zcopy": false, 00:13:09.644 "c2h_success": false, 00:13:09.644 "sock_priority": 0, 00:13:09.644 "abort_timeout_sec": 1 00:13:09.644 } 00:13:09.644 }, 00:13:09.644 { 00:13:09.644 "method": "nvmf_create_subsystem", 00:13:09.644 "params": { 
00:13:09.644 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:09.644 "allow_any_host": false, 00:13:09.644 "serial_number": "SPDK00000000000001", 00:13:09.644 "model_number": "SPDK bdev Controller", 00:13:09.644 "max_namespaces": 10, 00:13:09.644 "min_cntlid": 1, 00:13:09.644 "max_cntlid": 65519, 00:13:09.644 "ana_reporting": false 00:13:09.644 } 00:13:09.644 }, 00:13:09.644 { 00:13:09.644 "method": "nvmf_subsystem_add_host", 00:13:09.644 "params": { 00:13:09.644 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:09.644 "host": "nqn.2016-06.io.spdk:host1", 00:13:09.644 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:13:09.644 } 00:13:09.644 }, 00:13:09.644 { 00:13:09.644 "method": "nvmf_subsystem_add_ns", 00:13:09.644 "params": { 00:13:09.644 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:09.644 "namespace": { 00:13:09.644 "nsid": 1, 00:13:09.644 "bdev_name": "malloc0", 00:13:09.644 "nguid": "972B926038894858BC00E3786F632F14", 00:13:09.644 "uuid": "972b9260-3889-4858-bc00-e3786f632f14" 00:13:09.644 } 00:13:09.644 } 00:13:09.644 }, 00:13:09.644 { 00:13:09.644 "method": "nvmf_subsystem_add_listener", 00:13:09.644 "params": { 00:13:09.644 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:09.644 "listen_address": { 00:13:09.644 "trtype": "TCP", 00:13:09.644 "adrfam": "IPv4", 00:13:09.644 "traddr": "10.0.0.2", 00:13:09.644 "trsvcid": "4420" 00:13:09.644 }, 00:13:09.644 "secure_channel": true 00:13:09.644 } 00:13:09.644 } 00:13:09.644 ] 00:13:09.644 } 00:13:09.644 ] 00:13:09.644 }' 00:13:09.644 05:11:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:09.644 05:11:59 -- nvmf/common.sh@469 -- # nvmfpid=77630 00:13:09.644 05:11:59 -- nvmf/common.sh@470 -- # waitforlisten 77630 00:13:09.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.644 05:11:59 -- common/autotest_common.sh@829 -- # '[' -z 77630 ']' 00:13:09.644 05:11:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.644 05:11:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:09.644 05:11:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.644 05:11:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:09.644 05:11:59 -- common/autotest_common.sh@10 -- # set +x 00:13:09.903 [2024-12-08 05:11:59.431100] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:09.903 [2024-12-08 05:11:59.431389] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.903 [2024-12-08 05:11:59.576271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.903 [2024-12-08 05:11:59.630695] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:09.903 [2024-12-08 05:11:59.631251] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.903 [2024-12-08 05:11:59.631291] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.903 [2024-12-08 05:11:59.631309] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
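A minimal sketch of the restart pattern exercised above, in which the JSON captured earlier with "rpc.py save_config" is replayed into a fresh target through a /dev/fd path. This is not the script's literal code: the process-substitution form, the relative paths (scripts/rpc.py, build/bin/nvmf_tgt), and calling the sourced waitforlisten helper directly are assumptions for illustration; the exact /dev/fd number is chosen by the shell.
    tgtconf=$(scripts/rpc.py save_config)              # captured from the previous target instance before it was stopped
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -m 0x2 \
        -c <(echo "$tgtconf") &                        # config arrives on an anonymous /dev/fd/N path
    waitforlisten "$!"                                 # helper from autotest_common.sh; polls until the RPC socket answers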
00:13:09.903 [2024-12-08 05:11:59.631365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.162 [2024-12-08 05:11:59.822508] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.162 [2024-12-08 05:11:59.854467] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:10.162 [2024-12-08 05:11:59.854795] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.731 05:12:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:10.731 05:12:00 -- common/autotest_common.sh@862 -- # return 0 00:13:10.731 05:12:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:10.731 05:12:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:10.731 05:12:00 -- common/autotest_common.sh@10 -- # set +x 00:13:10.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:10.997 05:12:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.997 05:12:00 -- target/tls.sh@216 -- # bdevperf_pid=77664 00:13:10.997 05:12:00 -- target/tls.sh@217 -- # waitforlisten 77664 /var/tmp/bdevperf.sock 00:13:10.997 05:12:00 -- common/autotest_common.sh@829 -- # '[' -z 77664 ']' 00:13:10.997 05:12:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:10.997 05:12:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:10.997 05:12:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:10.997 05:12:00 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:10.997 05:12:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:10.997 05:12:00 -- common/autotest_common.sh@10 -- # set +x 00:13:10.997 05:12:00 -- target/tls.sh@213 -- # echo '{ 00:13:10.997 "subsystems": [ 00:13:10.997 { 00:13:10.997 "subsystem": "iobuf", 00:13:10.997 "config": [ 00:13:10.997 { 00:13:10.997 "method": "iobuf_set_options", 00:13:10.997 "params": { 00:13:10.997 "small_pool_count": 8192, 00:13:10.997 "large_pool_count": 1024, 00:13:10.997 "small_bufsize": 8192, 00:13:10.997 "large_bufsize": 135168 00:13:10.997 } 00:13:10.997 } 00:13:10.997 ] 00:13:10.997 }, 00:13:10.997 { 00:13:10.997 "subsystem": "sock", 00:13:10.997 "config": [ 00:13:10.997 { 00:13:10.997 "method": "sock_impl_set_options", 00:13:10.997 "params": { 00:13:10.997 "impl_name": "uring", 00:13:10.997 "recv_buf_size": 2097152, 00:13:10.997 "send_buf_size": 2097152, 00:13:10.997 "enable_recv_pipe": true, 00:13:10.997 "enable_quickack": false, 00:13:10.997 "enable_placement_id": 0, 00:13:10.997 "enable_zerocopy_send_server": false, 00:13:10.997 "enable_zerocopy_send_client": false, 00:13:10.997 "zerocopy_threshold": 0, 00:13:10.997 "tls_version": 0, 00:13:10.997 "enable_ktls": false 00:13:10.997 } 00:13:10.997 }, 00:13:10.997 { 00:13:10.997 "method": "sock_impl_set_options", 00:13:10.997 "params": { 00:13:10.997 "impl_name": "posix", 00:13:10.997 "recv_buf_size": 2097152, 00:13:10.997 "send_buf_size": 2097152, 00:13:10.997 "enable_recv_pipe": true, 00:13:10.997 "enable_quickack": false, 00:13:10.997 "enable_placement_id": 0, 00:13:10.997 "enable_zerocopy_send_server": true, 00:13:10.997 "enable_zerocopy_send_client": false, 00:13:10.997 "zerocopy_threshold": 0, 00:13:10.997 "tls_version": 0, 00:13:10.997 
"enable_ktls": false 00:13:10.997 } 00:13:10.997 }, 00:13:10.997 { 00:13:10.997 "method": "sock_impl_set_options", 00:13:10.997 "params": { 00:13:10.997 "impl_name": "ssl", 00:13:10.997 "recv_buf_size": 4096, 00:13:10.997 "send_buf_size": 4096, 00:13:10.997 "enable_recv_pipe": true, 00:13:10.997 "enable_quickack": false, 00:13:10.997 "enable_placement_id": 0, 00:13:10.997 "enable_zerocopy_send_server": true, 00:13:10.997 "enable_zerocopy_send_client": false, 00:13:10.997 "zerocopy_threshold": 0, 00:13:10.997 "tls_version": 0, 00:13:10.997 "enable_ktls": false 00:13:10.997 } 00:13:10.997 } 00:13:10.997 ] 00:13:10.997 }, 00:13:10.997 { 00:13:10.997 "subsystem": "vmd", 00:13:10.997 "config": [] 00:13:10.997 }, 00:13:10.997 { 00:13:10.997 "subsystem": "accel", 00:13:10.997 "config": [ 00:13:10.997 { 00:13:10.997 "method": "accel_set_options", 00:13:10.997 "params": { 00:13:10.997 "small_cache_size": 128, 00:13:10.997 "large_cache_size": 16, 00:13:10.997 "task_count": 2048, 00:13:10.997 "sequence_count": 2048, 00:13:10.997 "buf_count": 2048 00:13:10.997 } 00:13:10.997 } 00:13:10.997 ] 00:13:10.997 }, 00:13:10.997 { 00:13:10.997 "subsystem": "bdev", 00:13:10.997 "config": [ 00:13:10.997 { 00:13:10.997 "method": "bdev_set_options", 00:13:10.997 "params": { 00:13:10.997 "bdev_io_pool_size": 65535, 00:13:10.997 "bdev_io_cache_size": 256, 00:13:10.997 "bdev_auto_examine": true, 00:13:10.997 "iobuf_small_cache_size": 128, 00:13:10.997 "iobuf_large_cache_size": 16 00:13:10.997 } 00:13:10.997 }, 00:13:10.997 { 00:13:10.997 "method": "bdev_raid_set_options", 00:13:10.997 "params": { 00:13:10.997 "process_window_size_kb": 1024 00:13:10.997 } 00:13:10.997 }, 00:13:10.997 { 00:13:10.997 "method": "bdev_iscsi_set_options", 00:13:10.997 "params": { 00:13:10.997 "timeout_sec": 30 00:13:10.997 } 00:13:10.997 }, 00:13:10.997 { 00:13:10.997 "method": "bdev_nvme_set_options", 00:13:10.998 "params": { 00:13:10.998 "action_on_timeout": "none", 00:13:10.998 "timeout_us": 0, 00:13:10.998 "timeout_admin_us": 0, 00:13:10.998 "keep_alive_timeout_ms": 10000, 00:13:10.998 "transport_retry_count": 4, 00:13:10.998 "arbitration_burst": 0, 00:13:10.998 "low_priority_weight": 0, 00:13:10.998 "medium_priority_weight": 0, 00:13:10.998 "high_priority_weight": 0, 00:13:10.998 "nvme_adminq_poll_period_us": 10000, 00:13:10.998 "nvme_ioq_poll_period_us": 0, 00:13:10.998 "io_queue_requests": 512, 00:13:10.998 "delay_cmd_submit": true, 00:13:10.998 "bdev_retry_count": 3, 00:13:10.998 "transport_ack_timeout": 0, 00:13:10.998 "ctrlr_loss_timeout_sec": 0, 00:13:10.998 "reconnect_delay_sec": 0, 00:13:10.998 "fast_io_fail_timeout_sec": 0, 00:13:10.998 "generate_uuids": false, 00:13:10.998 "transport_tos": 0, 00:13:10.998 "io_path_stat": false, 00:13:10.998 "allow_accel_sequence": false 00:13:10.998 } 00:13:10.998 }, 00:13:10.998 { 00:13:10.998 "method": "bdev_nvme_attach_controller", 00:13:10.998 "params": { 00:13:10.998 "name": "TLSTEST", 00:13:10.998 "trtype": "TCP", 00:13:10.998 "adrfam": "IPv4", 00:13:10.998 "traddr": "10.0.0.2", 00:13:10.998 "trsvcid": "4420", 00:13:10.998 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:10.998 "prchk_reftag": false, 00:13:10.998 "prchk_guard": false, 00:13:10.998 "ctrlr_loss_timeout_sec": 0, 00:13:10.998 "reconnect_delay_sec": 0, 00:13:10.998 "fast_io_fail_timeout_sec": 0, 00:13:10.998 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:13:10.998 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:10.998 "hdgst": false, 00:13:10.998 "ddgst": false 00:13:10.998 } 00:13:10.998 }, 00:13:10.998 
{ 00:13:10.998 "method": "bdev_nvme_set_hotplug", 00:13:10.998 "params": { 00:13:10.998 "period_us": 100000, 00:13:10.998 "enable": false 00:13:10.998 } 00:13:10.998 }, 00:13:10.998 { 00:13:10.998 "method": "bdev_wait_for_examine" 00:13:10.998 } 00:13:10.998 ] 00:13:10.998 }, 00:13:10.998 { 00:13:10.998 "subsystem": "nbd", 00:13:10.998 "config": [] 00:13:10.998 } 00:13:10.998 ] 00:13:10.998 }' 00:13:10.998 [2024-12-08 05:12:00.563079] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:10.998 [2024-12-08 05:12:00.563344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77664 ] 00:13:10.998 [2024-12-08 05:12:00.701072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.998 [2024-12-08 05:12:00.741465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.275 [2024-12-08 05:12:00.868744] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:12.210 05:12:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:12.210 05:12:01 -- common/autotest_common.sh@862 -- # return 0 00:13:12.210 05:12:01 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:12.210 Running I/O for 10 seconds... 00:13:22.177 00:13:22.177 Latency(us) 00:13:22.177 [2024-12-08T05:12:11.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.177 [2024-12-08T05:12:11.963Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:22.177 Verification LBA range: start 0x0 length 0x2000 00:13:22.177 TLSTESTn1 : 10.02 5249.73 20.51 0.00 0.00 24343.49 4855.62 32648.84 00:13:22.177 [2024-12-08T05:12:11.963Z] =================================================================================================================== 00:13:22.177 [2024-12-08T05:12:11.963Z] Total : 5249.73 20.51 0.00 0.00 24343.49 4855.62 32648.84 00:13:22.177 0 00:13:22.177 05:12:11 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:22.177 05:12:11 -- target/tls.sh@223 -- # killprocess 77664 00:13:22.177 05:12:11 -- common/autotest_common.sh@936 -- # '[' -z 77664 ']' 00:13:22.177 05:12:11 -- common/autotest_common.sh@940 -- # kill -0 77664 00:13:22.177 05:12:11 -- common/autotest_common.sh@941 -- # uname 00:13:22.177 05:12:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:22.177 05:12:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77664 00:13:22.177 killing process with pid 77664 00:13:22.177 Received shutdown signal, test time was about 10.000000 seconds 00:13:22.177 00:13:22.177 Latency(us) 00:13:22.177 [2024-12-08T05:12:11.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.177 [2024-12-08T05:12:11.963Z] =================================================================================================================== 00:13:22.177 [2024-12-08T05:12:11.963Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:22.177 05:12:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:22.177 05:12:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:22.177 05:12:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77664' 00:13:22.177 05:12:11 -- 
common/autotest_common.sh@955 -- # kill 77664 00:13:22.177 05:12:11 -- common/autotest_common.sh@960 -- # wait 77664 00:13:22.436 05:12:11 -- target/tls.sh@224 -- # killprocess 77630 00:13:22.436 05:12:11 -- common/autotest_common.sh@936 -- # '[' -z 77630 ']' 00:13:22.436 05:12:11 -- common/autotest_common.sh@940 -- # kill -0 77630 00:13:22.436 05:12:11 -- common/autotest_common.sh@941 -- # uname 00:13:22.436 05:12:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:22.436 05:12:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77630 00:13:22.436 killing process with pid 77630 00:13:22.436 05:12:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:22.436 05:12:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:22.436 05:12:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77630' 00:13:22.436 05:12:12 -- common/autotest_common.sh@955 -- # kill 77630 00:13:22.436 05:12:12 -- common/autotest_common.sh@960 -- # wait 77630 00:13:22.436 05:12:12 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:13:22.436 05:12:12 -- target/tls.sh@227 -- # cleanup 00:13:22.436 05:12:12 -- target/tls.sh@15 -- # process_shm --id 0 00:13:22.436 05:12:12 -- common/autotest_common.sh@806 -- # type=--id 00:13:22.436 05:12:12 -- common/autotest_common.sh@807 -- # id=0 00:13:22.436 05:12:12 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:22.436 05:12:12 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:22.436 05:12:12 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:22.436 05:12:12 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:22.436 05:12:12 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:22.436 05:12:12 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:22.436 nvmf_trace.0 00:13:22.694 05:12:12 -- common/autotest_common.sh@821 -- # return 0 00:13:22.694 05:12:12 -- target/tls.sh@16 -- # killprocess 77664 00:13:22.694 05:12:12 -- common/autotest_common.sh@936 -- # '[' -z 77664 ']' 00:13:22.694 Process with pid 77664 is not found 00:13:22.695 05:12:12 -- common/autotest_common.sh@940 -- # kill -0 77664 00:13:22.695 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77664) - No such process 00:13:22.695 05:12:12 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77664 is not found' 00:13:22.695 05:12:12 -- target/tls.sh@17 -- # nvmftestfini 00:13:22.695 05:12:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:22.695 05:12:12 -- nvmf/common.sh@116 -- # sync 00:13:22.695 05:12:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:22.695 05:12:12 -- nvmf/common.sh@119 -- # set +e 00:13:22.695 05:12:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:22.695 05:12:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:22.695 rmmod nvme_tcp 00:13:22.695 rmmod nvme_fabrics 00:13:22.695 rmmod nvme_keyring 00:13:22.695 05:12:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:22.695 05:12:12 -- nvmf/common.sh@123 -- # set -e 00:13:22.695 05:12:12 -- nvmf/common.sh@124 -- # return 0 00:13:22.695 05:12:12 -- nvmf/common.sh@477 -- # '[' -n 77630 ']' 00:13:22.695 05:12:12 -- nvmf/common.sh@478 -- # killprocess 77630 00:13:22.695 05:12:12 -- common/autotest_common.sh@936 -- # '[' -z 77630 ']' 00:13:22.695 Process with pid 77630 is not found 00:13:22.695 05:12:12 -- 
common/autotest_common.sh@940 -- # kill -0 77630 00:13:22.695 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77630) - No such process 00:13:22.695 05:12:12 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77630 is not found' 00:13:22.695 05:12:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:22.695 05:12:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:22.695 05:12:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:22.695 05:12:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:22.695 05:12:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:22.695 05:12:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.695 05:12:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:22.695 05:12:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.695 05:12:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:22.695 05:12:12 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:22.695 00:13:22.695 real 1m9.168s 00:13:22.695 user 1m47.396s 00:13:22.695 sys 0m23.768s 00:13:22.695 05:12:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:22.695 05:12:12 -- common/autotest_common.sh@10 -- # set +x 00:13:22.695 ************************************ 00:13:22.695 END TEST nvmf_tls 00:13:22.695 ************************************ 00:13:22.695 05:12:12 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:22.695 05:12:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:22.695 05:12:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:22.695 05:12:12 -- common/autotest_common.sh@10 -- # set +x 00:13:22.695 ************************************ 00:13:22.695 START TEST nvmf_fips 00:13:22.695 ************************************ 00:13:22.695 05:12:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:22.954 * Looking for test storage... 
00:13:22.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:13:22.954 05:12:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:22.954 05:12:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:22.954 05:12:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:22.954 05:12:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:22.954 05:12:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:22.954 05:12:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:22.954 05:12:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:22.954 05:12:12 -- scripts/common.sh@335 -- # IFS=.-: 00:13:22.954 05:12:12 -- scripts/common.sh@335 -- # read -ra ver1 00:13:22.954 05:12:12 -- scripts/common.sh@336 -- # IFS=.-: 00:13:22.954 05:12:12 -- scripts/common.sh@336 -- # read -ra ver2 00:13:22.954 05:12:12 -- scripts/common.sh@337 -- # local 'op=<' 00:13:22.954 05:12:12 -- scripts/common.sh@339 -- # ver1_l=2 00:13:22.954 05:12:12 -- scripts/common.sh@340 -- # ver2_l=1 00:13:22.954 05:12:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:22.954 05:12:12 -- scripts/common.sh@343 -- # case "$op" in 00:13:22.954 05:12:12 -- scripts/common.sh@344 -- # : 1 00:13:22.954 05:12:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:22.954 05:12:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:22.954 05:12:12 -- scripts/common.sh@364 -- # decimal 1 00:13:22.954 05:12:12 -- scripts/common.sh@352 -- # local d=1 00:13:22.954 05:12:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:22.954 05:12:12 -- scripts/common.sh@354 -- # echo 1 00:13:22.954 05:12:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:22.954 05:12:12 -- scripts/common.sh@365 -- # decimal 2 00:13:22.954 05:12:12 -- scripts/common.sh@352 -- # local d=2 00:13:22.954 05:12:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:22.954 05:12:12 -- scripts/common.sh@354 -- # echo 2 00:13:22.954 05:12:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:22.954 05:12:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:22.954 05:12:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:22.954 05:12:12 -- scripts/common.sh@367 -- # return 0 00:13:22.954 05:12:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:22.954 05:12:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:22.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.954 --rc genhtml_branch_coverage=1 00:13:22.954 --rc genhtml_function_coverage=1 00:13:22.954 --rc genhtml_legend=1 00:13:22.954 --rc geninfo_all_blocks=1 00:13:22.954 --rc geninfo_unexecuted_blocks=1 00:13:22.954 00:13:22.954 ' 00:13:22.954 05:12:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:22.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.954 --rc genhtml_branch_coverage=1 00:13:22.954 --rc genhtml_function_coverage=1 00:13:22.954 --rc genhtml_legend=1 00:13:22.954 --rc geninfo_all_blocks=1 00:13:22.954 --rc geninfo_unexecuted_blocks=1 00:13:22.954 00:13:22.954 ' 00:13:22.954 05:12:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:22.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.954 --rc genhtml_branch_coverage=1 00:13:22.954 --rc genhtml_function_coverage=1 00:13:22.954 --rc genhtml_legend=1 00:13:22.954 --rc geninfo_all_blocks=1 00:13:22.954 --rc geninfo_unexecuted_blocks=1 00:13:22.954 00:13:22.954 ' 00:13:22.954 
05:12:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:22.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.954 --rc genhtml_branch_coverage=1 00:13:22.954 --rc genhtml_function_coverage=1 00:13:22.954 --rc genhtml_legend=1 00:13:22.954 --rc geninfo_all_blocks=1 00:13:22.954 --rc geninfo_unexecuted_blocks=1 00:13:22.955 00:13:22.955 ' 00:13:22.955 05:12:12 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:22.955 05:12:12 -- nvmf/common.sh@7 -- # uname -s 00:13:22.955 05:12:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.955 05:12:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.955 05:12:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.955 05:12:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.955 05:12:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.955 05:12:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.955 05:12:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.955 05:12:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.955 05:12:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.955 05:12:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.955 05:12:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:13:22.955 05:12:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:13:22.955 05:12:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.955 05:12:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.955 05:12:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:22.955 05:12:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:22.955 05:12:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.955 05:12:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.955 05:12:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.955 05:12:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.955 05:12:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.955 05:12:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.955 05:12:12 -- paths/export.sh@5 -- # export PATH 00:13:22.955 05:12:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.955 05:12:12 -- nvmf/common.sh@46 -- # : 0 00:13:22.955 05:12:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:22.955 05:12:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:22.955 05:12:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:22.955 05:12:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.955 05:12:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.955 05:12:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:22.955 05:12:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:22.955 05:12:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:22.955 05:12:12 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:22.955 05:12:12 -- fips/fips.sh@89 -- # check_openssl_version 00:13:22.955 05:12:12 -- fips/fips.sh@83 -- # local target=3.0.0 00:13:22.955 05:12:12 -- fips/fips.sh@85 -- # openssl version 00:13:22.955 05:12:12 -- fips/fips.sh@85 -- # awk '{print $2}' 00:13:22.955 05:12:12 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:13:22.955 05:12:12 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:13:22.955 05:12:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:22.955 05:12:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:22.955 05:12:12 -- scripts/common.sh@335 -- # IFS=.-: 00:13:22.955 05:12:12 -- scripts/common.sh@335 -- # read -ra ver1 00:13:22.955 05:12:12 -- scripts/common.sh@336 -- # IFS=.-: 00:13:22.955 05:12:12 -- scripts/common.sh@336 -- # read -ra ver2 00:13:22.955 05:12:12 -- scripts/common.sh@337 -- # local 'op=>=' 00:13:22.955 05:12:12 -- scripts/common.sh@339 -- # ver1_l=3 00:13:22.955 05:12:12 -- scripts/common.sh@340 -- # ver2_l=3 00:13:22.955 05:12:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:22.955 05:12:12 -- scripts/common.sh@343 -- # case "$op" in 00:13:22.955 05:12:12 -- scripts/common.sh@347 -- # : 1 00:13:22.955 05:12:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:22.955 05:12:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:22.955 05:12:12 -- scripts/common.sh@364 -- # decimal 3 00:13:22.955 05:12:12 -- scripts/common.sh@352 -- # local d=3 00:13:22.955 05:12:12 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:22.955 05:12:12 -- scripts/common.sh@354 -- # echo 3 00:13:22.955 05:12:12 -- scripts/common.sh@364 -- # ver1[v]=3 00:13:22.955 05:12:12 -- scripts/common.sh@365 -- # decimal 3 00:13:22.955 05:12:12 -- scripts/common.sh@352 -- # local d=3 00:13:22.955 05:12:12 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:22.955 05:12:12 -- scripts/common.sh@354 -- # echo 3 00:13:22.955 05:12:12 -- scripts/common.sh@365 -- # ver2[v]=3 00:13:22.955 05:12:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:22.955 05:12:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:22.955 05:12:12 -- scripts/common.sh@363 -- # (( v++ )) 00:13:22.955 05:12:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:22.955 05:12:12 -- scripts/common.sh@364 -- # decimal 1 00:13:22.955 05:12:12 -- scripts/common.sh@352 -- # local d=1 00:13:22.955 05:12:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:22.955 05:12:12 -- scripts/common.sh@354 -- # echo 1 00:13:22.955 05:12:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:22.955 05:12:12 -- scripts/common.sh@365 -- # decimal 0 00:13:22.955 05:12:12 -- scripts/common.sh@352 -- # local d=0 00:13:22.955 05:12:12 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:22.955 05:12:12 -- scripts/common.sh@354 -- # echo 0 00:13:22.955 05:12:12 -- scripts/common.sh@365 -- # ver2[v]=0 00:13:22.955 05:12:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:22.955 05:12:12 -- scripts/common.sh@366 -- # return 0 00:13:22.955 05:12:12 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:13:22.955 05:12:12 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:13:22.955 05:12:12 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:13:22.955 05:12:12 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:13:22.956 05:12:12 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:13:22.956 05:12:12 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:13:22.956 05:12:12 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:13:22.956 05:12:12 -- fips/fips.sh@113 -- # build_openssl_config 00:13:22.956 05:12:12 -- fips/fips.sh@37 -- # cat 00:13:23.214 05:12:12 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:13:23.214 05:12:12 -- fips/fips.sh@58 -- # cat - 00:13:23.214 05:12:12 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:13:23.214 05:12:12 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:13:23.214 05:12:12 -- fips/fips.sh@116 -- # mapfile -t providers 00:13:23.214 05:12:12 -- fips/fips.sh@116 -- # grep name 00:13:23.214 05:12:12 -- fips/fips.sh@116 -- # openssl list -providers 00:13:23.214 05:12:12 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:13:23.214 05:12:12 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:13:23.214 05:12:12 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:13:23.214 05:12:12 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:13:23.214 05:12:12 -- fips/fips.sh@127 -- # : 00:13:23.214 05:12:12 -- common/autotest_common.sh@650 -- # local es=0 00:13:23.214 05:12:12 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:13:23.214 05:12:12 -- common/autotest_common.sh@638 -- # local arg=openssl 00:13:23.214 05:12:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:23.214 05:12:12 -- common/autotest_common.sh@642 -- # type -t openssl 00:13:23.214 05:12:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:23.214 05:12:12 -- common/autotest_common.sh@644 -- # type -P openssl 00:13:23.214 05:12:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:23.214 05:12:12 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:13:23.214 05:12:12 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:13:23.214 05:12:12 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:13:23.214 Error setting digest 00:13:23.214 40A2A4C1487F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:13:23.214 40A2A4C1487F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:13:23.214 05:12:12 -- common/autotest_common.sh@653 -- # es=1 00:13:23.214 05:12:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:23.214 05:12:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:23.214 05:12:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:23.214 05:12:12 -- fips/fips.sh@130 -- # nvmftestinit 00:13:23.214 05:12:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:23.214 05:12:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.214 05:12:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:23.214 05:12:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:23.214 05:12:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:23.214 05:12:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.214 05:12:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.214 05:12:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.214 05:12:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:23.214 05:12:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:23.214 05:12:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:23.214 05:12:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:23.214 05:12:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:23.214 05:12:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:23.214 05:12:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:23.214 05:12:12 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:23.214 05:12:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:23.214 05:12:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:23.214 05:12:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:23.214 05:12:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:23.214 05:12:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:23.214 05:12:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:23.214 05:12:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:23.214 05:12:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:23.214 05:12:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:23.215 05:12:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:23.215 05:12:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:23.215 05:12:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:23.215 Cannot find device "nvmf_tgt_br" 00:13:23.215 05:12:12 -- nvmf/common.sh@154 -- # true 00:13:23.215 05:12:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:23.215 Cannot find device "nvmf_tgt_br2" 00:13:23.215 05:12:12 -- nvmf/common.sh@155 -- # true 00:13:23.215 05:12:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:23.215 05:12:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:23.215 Cannot find device "nvmf_tgt_br" 00:13:23.215 05:12:12 -- nvmf/common.sh@157 -- # true 00:13:23.215 05:12:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:23.215 Cannot find device "nvmf_tgt_br2" 00:13:23.215 05:12:12 -- nvmf/common.sh@158 -- # true 00:13:23.215 05:12:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:23.215 05:12:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:23.215 05:12:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:23.215 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:23.215 05:12:12 -- nvmf/common.sh@161 -- # true 00:13:23.215 05:12:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:23.215 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:23.215 05:12:12 -- nvmf/common.sh@162 -- # true 00:13:23.215 05:12:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:23.473 05:12:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:23.473 05:12:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:23.473 05:12:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:23.473 05:12:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:23.473 05:12:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:23.473 05:12:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:23.473 05:12:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:23.473 05:12:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:23.473 05:12:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:23.473 05:12:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:23.473 05:12:13 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:23.473 05:12:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:23.473 05:12:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:23.473 05:12:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:23.473 05:12:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:23.473 05:12:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:23.473 05:12:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:23.473 05:12:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:23.473 05:12:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:23.473 05:12:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:23.473 05:12:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:23.473 05:12:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:23.473 05:12:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:23.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:23.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:13:23.473 00:13:23.473 --- 10.0.0.2 ping statistics --- 00:13:23.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.473 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:13:23.473 05:12:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:23.473 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:23.473 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:13:23.473 00:13:23.473 --- 10.0.0.3 ping statistics --- 00:13:23.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.473 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:13:23.473 05:12:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:23.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:23.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:13:23.473 00:13:23.473 --- 10.0.0.1 ping statistics --- 00:13:23.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.473 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:13:23.473 05:12:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:23.473 05:12:13 -- nvmf/common.sh@421 -- # return 0 00:13:23.473 05:12:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:23.473 05:12:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:23.473 05:12:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:23.473 05:12:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:23.473 05:12:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:23.473 05:12:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:23.473 05:12:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:23.473 05:12:13 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:13:23.473 05:12:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:23.473 05:12:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:23.473 05:12:13 -- common/autotest_common.sh@10 -- # set +x 00:13:23.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
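The nvmf_veth_init steps traced above give the FIPS run its own NVMe/TCP test network: the target lives in the nvmf_tgt_ns_spdk namespace, each side gets a veth pair, the host-side ends are enslaved to the nvmf_br bridge, an iptables rule admits TCP port 4420, and a ping to each address confirms reachability before the target is started. Condensed to a sketch (interface names and 10.0.0.x addresses as assigned in the trace; the second target interface, nvmf_tgt_if2/10.0.0.3, follows the same pattern and is omitted here):

ip netns add nvmf_tgt_ns_spdk                                  # target namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge joining the host-side ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # reachability check before the test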
00:13:23.473 05:12:13 -- nvmf/common.sh@469 -- # nvmfpid=78022 00:13:23.473 05:12:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:23.473 05:12:13 -- nvmf/common.sh@470 -- # waitforlisten 78022 00:13:23.473 05:12:13 -- common/autotest_common.sh@829 -- # '[' -z 78022 ']' 00:13:23.473 05:12:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.473 05:12:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:23.473 05:12:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.473 05:12:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:23.473 05:12:13 -- common/autotest_common.sh@10 -- # set +x 00:13:23.731 [2024-12-08 05:12:13.291056] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:23.731 [2024-12-08 05:12:13.291161] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.731 [2024-12-08 05:12:13.428943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.731 [2024-12-08 05:12:13.467538] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:23.731 [2024-12-08 05:12:13.467807] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.731 [2024-12-08 05:12:13.467836] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.731 [2024-12-08 05:12:13.467853] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
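Only once the namespace network is up does the harness launch the target inside it and block until the RPC socket answers; the "Waiting for process to start up..." message above comes from that wait. A minimal stand-in for what nvmfappstart and waitforlisten do here (binary path, core mask and socket path as in the trace; the polling loop is a simplification of the real helper):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!                                   # 78022 in this run
until [ -S /var/tmp/spdk.sock ]; do          # the target creates the socket once it accepts RPCs
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done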
00:13:23.731 [2024-12-08 05:12:13.467892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.693 05:12:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:24.693 05:12:14 -- common/autotest_common.sh@862 -- # return 0 00:13:24.693 05:12:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:24.693 05:12:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:24.693 05:12:14 -- common/autotest_common.sh@10 -- # set +x 00:13:24.693 05:12:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.693 05:12:14 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:13:24.693 05:12:14 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:24.693 05:12:14 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:24.693 05:12:14 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:24.693 05:12:14 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:24.693 05:12:14 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:24.693 05:12:14 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:24.693 05:12:14 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:24.961 [2024-12-08 05:12:14.641834] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.961 [2024-12-08 05:12:14.657778] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:24.961 [2024-12-08 05:12:14.658089] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.961 malloc0 00:13:24.961 05:12:14 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:24.961 05:12:14 -- fips/fips.sh@147 -- # bdevperf_pid=78056 00:13:24.961 05:12:14 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:24.961 05:12:14 -- fips/fips.sh@148 -- # waitforlisten 78056 /var/tmp/bdevperf.sock 00:13:24.961 05:12:14 -- common/autotest_common.sh@829 -- # '[' -z 78056 ']' 00:13:24.961 05:12:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:24.961 05:12:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:24.961 05:12:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:24.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:24.961 05:12:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:24.961 05:12:14 -- common/autotest_common.sh@10 -- # set +x 00:13:25.219 [2024-12-08 05:12:14.778769] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
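The key set up just above is an NVMe/TCP TLS pre-shared key: fips.sh writes it to a file, restricts the file to mode 0600, feeds it to the target through setup_nvmf_tgt_conf, and later hands the same file to the bdevperf initiator via --psk (the attach call appears further down in this trace). A sketch using only values visible in this log; the exact RPCs setup_nvmf_tgt_conf issues on the target side are not expanded in the trace, so the target-side summary comment is an assumption:

key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"                       # PSK file must not be readable by others
# Target side (handled by setup_nvmf_tgt_conf via scripts/rpc.py; details not shown in this trace):
# TCP transport, a malloc namespace, subsystem nqn.2016-06.io.spdk:cnode1, listener on 10.0.0.2:4420.
# Initiator side: bdevperf attaches over TLS with the same key file, as in the trace below:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk "$key_path"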
00:13:25.219 [2024-12-08 05:12:14.778857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78056 ] 00:13:25.219 [2024-12-08 05:12:14.918540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.219 [2024-12-08 05:12:14.959425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.153 05:12:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:26.153 05:12:15 -- common/autotest_common.sh@862 -- # return 0 00:13:26.153 05:12:15 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:26.412 [2024-12-08 05:12:16.013375] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:26.412 TLSTESTn1 00:13:26.412 05:12:16 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:26.670 Running I/O for 10 seconds... 00:13:36.642 00:13:36.642 Latency(us) 00:13:36.642 [2024-12-08T05:12:26.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.642 [2024-12-08T05:12:26.428Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:36.642 Verification LBA range: start 0x0 length 0x2000 00:13:36.642 TLSTESTn1 : 10.02 5363.32 20.95 0.00 0.00 23825.77 6404.65 30980.65 00:13:36.642 [2024-12-08T05:12:26.428Z] =================================================================================================================== 00:13:36.642 [2024-12-08T05:12:26.428Z] Total : 5363.32 20.95 0.00 0.00 23825.77 6404.65 30980.65 00:13:36.642 0 00:13:36.642 05:12:26 -- fips/fips.sh@1 -- # cleanup 00:13:36.642 05:12:26 -- fips/fips.sh@15 -- # process_shm --id 0 00:13:36.642 05:12:26 -- common/autotest_common.sh@806 -- # type=--id 00:13:36.642 05:12:26 -- common/autotest_common.sh@807 -- # id=0 00:13:36.642 05:12:26 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:36.642 05:12:26 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:36.642 05:12:26 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:36.642 05:12:26 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:36.642 05:12:26 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:36.642 05:12:26 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:36.642 nvmf_trace.0 00:13:36.642 05:12:26 -- common/autotest_common.sh@821 -- # return 0 00:13:36.642 05:12:26 -- fips/fips.sh@16 -- # killprocess 78056 00:13:36.642 05:12:26 -- common/autotest_common.sh@936 -- # '[' -z 78056 ']' 00:13:36.642 05:12:26 -- common/autotest_common.sh@940 -- # kill -0 78056 00:13:36.642 05:12:26 -- common/autotest_common.sh@941 -- # uname 00:13:36.642 05:12:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:36.642 05:12:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78056 00:13:36.642 killing process with pid 78056 00:13:36.642 Received shutdown signal, test time was about 10.000000 seconds 00:13:36.642 00:13:36.642 Latency(us) 00:13:36.642 
[2024-12-08T05:12:26.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.642 [2024-12-08T05:12:26.428Z] =================================================================================================================== 00:13:36.642 [2024-12-08T05:12:26.428Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:36.642 05:12:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:36.642 05:12:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:36.642 05:12:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78056' 00:13:36.642 05:12:26 -- common/autotest_common.sh@955 -- # kill 78056 00:13:36.642 05:12:26 -- common/autotest_common.sh@960 -- # wait 78056 00:13:36.901 05:12:26 -- fips/fips.sh@17 -- # nvmftestfini 00:13:36.901 05:12:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:36.901 05:12:26 -- nvmf/common.sh@116 -- # sync 00:13:36.901 05:12:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:36.901 05:12:26 -- nvmf/common.sh@119 -- # set +e 00:13:36.901 05:12:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:36.901 05:12:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:36.901 rmmod nvme_tcp 00:13:36.901 rmmod nvme_fabrics 00:13:36.901 rmmod nvme_keyring 00:13:36.901 05:12:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:36.901 05:12:26 -- nvmf/common.sh@123 -- # set -e 00:13:36.901 05:12:26 -- nvmf/common.sh@124 -- # return 0 00:13:36.901 05:12:26 -- nvmf/common.sh@477 -- # '[' -n 78022 ']' 00:13:36.901 05:12:26 -- nvmf/common.sh@478 -- # killprocess 78022 00:13:36.901 05:12:26 -- common/autotest_common.sh@936 -- # '[' -z 78022 ']' 00:13:36.901 05:12:26 -- common/autotest_common.sh@940 -- # kill -0 78022 00:13:36.901 05:12:26 -- common/autotest_common.sh@941 -- # uname 00:13:36.901 05:12:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:36.901 05:12:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78022 00:13:36.901 killing process with pid 78022 00:13:36.901 05:12:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:36.901 05:12:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:36.901 05:12:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78022' 00:13:36.901 05:12:26 -- common/autotest_common.sh@955 -- # kill 78022 00:13:36.901 05:12:26 -- common/autotest_common.sh@960 -- # wait 78022 00:13:37.160 05:12:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:37.160 05:12:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:37.160 05:12:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:37.160 05:12:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:37.160 05:12:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:37.160 05:12:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.160 05:12:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.160 05:12:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.160 05:12:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:37.160 05:12:26 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:37.160 ************************************ 00:13:37.160 END TEST nvmf_fips 00:13:37.160 ************************************ 00:13:37.160 00:13:37.160 real 0m14.433s 00:13:37.160 user 0m19.549s 00:13:37.160 sys 0m5.868s 00:13:37.160 05:12:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 
00:13:37.160 05:12:26 -- common/autotest_common.sh@10 -- # set +x 00:13:37.160 05:12:26 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:13:37.160 05:12:26 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:13:37.160 05:12:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:37.160 05:12:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:37.160 05:12:26 -- common/autotest_common.sh@10 -- # set +x 00:13:37.160 ************************************ 00:13:37.160 START TEST nvmf_fuzz 00:13:37.160 ************************************ 00:13:37.160 05:12:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:13:37.445 * Looking for test storage... 00:13:37.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:37.445 05:12:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:37.445 05:12:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:37.445 05:12:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:37.445 05:12:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:37.445 05:12:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:37.445 05:12:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:37.445 05:12:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:37.445 05:12:27 -- scripts/common.sh@335 -- # IFS=.-: 00:13:37.445 05:12:27 -- scripts/common.sh@335 -- # read -ra ver1 00:13:37.445 05:12:27 -- scripts/common.sh@336 -- # IFS=.-: 00:13:37.445 05:12:27 -- scripts/common.sh@336 -- # read -ra ver2 00:13:37.445 05:12:27 -- scripts/common.sh@337 -- # local 'op=<' 00:13:37.445 05:12:27 -- scripts/common.sh@339 -- # ver1_l=2 00:13:37.445 05:12:27 -- scripts/common.sh@340 -- # ver2_l=1 00:13:37.445 05:12:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:37.445 05:12:27 -- scripts/common.sh@343 -- # case "$op" in 00:13:37.445 05:12:27 -- scripts/common.sh@344 -- # : 1 00:13:37.445 05:12:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:37.445 05:12:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:37.445 05:12:27 -- scripts/common.sh@364 -- # decimal 1 00:13:37.445 05:12:27 -- scripts/common.sh@352 -- # local d=1 00:13:37.446 05:12:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:37.446 05:12:27 -- scripts/common.sh@354 -- # echo 1 00:13:37.446 05:12:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:37.446 05:12:27 -- scripts/common.sh@365 -- # decimal 2 00:13:37.446 05:12:27 -- scripts/common.sh@352 -- # local d=2 00:13:37.446 05:12:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:37.446 05:12:27 -- scripts/common.sh@354 -- # echo 2 00:13:37.446 05:12:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:37.446 05:12:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:37.446 05:12:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:37.446 05:12:27 -- scripts/common.sh@367 -- # return 0 00:13:37.446 05:12:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:37.446 05:12:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:37.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.446 --rc genhtml_branch_coverage=1 00:13:37.446 --rc genhtml_function_coverage=1 00:13:37.446 --rc genhtml_legend=1 00:13:37.446 --rc geninfo_all_blocks=1 00:13:37.446 --rc geninfo_unexecuted_blocks=1 00:13:37.446 00:13:37.446 ' 00:13:37.446 05:12:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:37.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.446 --rc genhtml_branch_coverage=1 00:13:37.446 --rc genhtml_function_coverage=1 00:13:37.446 --rc genhtml_legend=1 00:13:37.446 --rc geninfo_all_blocks=1 00:13:37.446 --rc geninfo_unexecuted_blocks=1 00:13:37.446 00:13:37.446 ' 00:13:37.446 05:12:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:37.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.446 --rc genhtml_branch_coverage=1 00:13:37.446 --rc genhtml_function_coverage=1 00:13:37.446 --rc genhtml_legend=1 00:13:37.446 --rc geninfo_all_blocks=1 00:13:37.446 --rc geninfo_unexecuted_blocks=1 00:13:37.446 00:13:37.446 ' 00:13:37.446 05:12:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:37.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.446 --rc genhtml_branch_coverage=1 00:13:37.446 --rc genhtml_function_coverage=1 00:13:37.446 --rc genhtml_legend=1 00:13:37.446 --rc geninfo_all_blocks=1 00:13:37.446 --rc geninfo_unexecuted_blocks=1 00:13:37.446 00:13:37.446 ' 00:13:37.446 05:12:27 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:37.446 05:12:27 -- nvmf/common.sh@7 -- # uname -s 00:13:37.446 05:12:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.446 05:12:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.446 05:12:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.446 05:12:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.446 05:12:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.446 05:12:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.446 05:12:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.446 05:12:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.446 05:12:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.446 05:12:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.446 05:12:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 
00:13:37.446 05:12:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:13:37.446 05:12:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.446 05:12:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.446 05:12:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:37.446 05:12:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:37.446 05:12:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.446 05:12:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.446 05:12:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.446 05:12:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.446 05:12:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.446 05:12:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.446 05:12:27 -- paths/export.sh@5 -- # export PATH 00:13:37.446 05:12:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.446 05:12:27 -- nvmf/common.sh@46 -- # : 0 00:13:37.446 05:12:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:37.446 05:12:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:37.446 05:12:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:37.446 05:12:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.446 05:12:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.446 05:12:27 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:37.446 05:12:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:37.446 05:12:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:37.446 05:12:27 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:13:37.446 05:12:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:37.446 05:12:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:37.446 05:12:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:37.446 05:12:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:37.446 05:12:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:37.446 05:12:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.446 05:12:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.446 05:12:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.446 05:12:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:37.446 05:12:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:37.446 05:12:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:37.446 05:12:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:37.446 05:12:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:37.446 05:12:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:37.446 05:12:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.446 05:12:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.446 05:12:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:37.446 05:12:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:37.446 05:12:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:37.446 05:12:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:37.446 05:12:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:37.446 05:12:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.446 05:12:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:37.446 05:12:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:37.446 05:12:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:37.446 05:12:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:37.446 05:12:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:37.446 05:12:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:37.446 Cannot find device "nvmf_tgt_br" 00:13:37.446 05:12:27 -- nvmf/common.sh@154 -- # true 00:13:37.446 05:12:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:37.446 Cannot find device "nvmf_tgt_br2" 00:13:37.446 05:12:27 -- nvmf/common.sh@155 -- # true 00:13:37.446 05:12:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:37.446 05:12:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:37.446 Cannot find device "nvmf_tgt_br" 00:13:37.446 05:12:27 -- nvmf/common.sh@157 -- # true 00:13:37.446 05:12:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:37.446 Cannot find device "nvmf_tgt_br2" 00:13:37.446 05:12:27 -- nvmf/common.sh@158 -- # true 00:13:37.446 05:12:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:37.705 05:12:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:37.705 05:12:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:37.705 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:37.705 05:12:27 -- nvmf/common.sh@161 -- # true 00:13:37.705 05:12:27 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:37.705 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:37.705 05:12:27 -- nvmf/common.sh@162 -- # true 00:13:37.705 05:12:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:37.705 05:12:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:37.705 05:12:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:37.705 05:12:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:37.705 05:12:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:37.705 05:12:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:37.705 05:12:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:37.705 05:12:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:37.705 05:12:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:37.705 05:12:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:37.705 05:12:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:37.705 05:12:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:37.705 05:12:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:37.705 05:12:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:37.705 05:12:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:37.705 05:12:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:37.705 05:12:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:37.705 05:12:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:37.705 05:12:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:37.705 05:12:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:37.705 05:12:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:37.705 05:12:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:37.705 05:12:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:37.705 05:12:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:37.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:37.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:13:37.705 00:13:37.705 --- 10.0.0.2 ping statistics --- 00:13:37.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.705 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:13:37.705 05:12:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:37.705 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:37.705 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:13:37.705 00:13:37.705 --- 10.0.0.3 ping statistics --- 00:13:37.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.705 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:13:37.705 05:12:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:37.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:37.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:13:37.705 00:13:37.705 --- 10.0.0.1 ping statistics --- 00:13:37.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.705 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:37.705 05:12:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.705 05:12:27 -- nvmf/common.sh@421 -- # return 0 00:13:37.705 05:12:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:37.705 05:12:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.705 05:12:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:37.705 05:12:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:37.705 05:12:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.705 05:12:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:37.705 05:12:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:37.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.705 05:12:27 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=78389 00:13:37.705 05:12:27 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:37.705 05:12:27 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:37.705 05:12:27 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 78389 00:13:37.705 05:12:27 -- common/autotest_common.sh@829 -- # '[' -z 78389 ']' 00:13:37.705 05:12:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.705 05:12:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:37.705 05:12:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:37.705 05:12:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:37.705 05:12:27 -- common/autotest_common.sh@10 -- # set +x 00:13:39.080 05:12:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:39.080 05:12:28 -- common/autotest_common.sh@862 -- # return 0 00:13:39.080 05:12:28 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:39.080 05:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.080 05:12:28 -- common/autotest_common.sh@10 -- # set +x 00:13:39.080 05:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.080 05:12:28 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:13:39.080 05:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.080 05:12:28 -- common/autotest_common.sh@10 -- # set +x 00:13:39.080 Malloc0 00:13:39.080 05:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.080 05:12:28 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:39.080 05:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.080 05:12:28 -- common/autotest_common.sh@10 -- # set +x 00:13:39.080 05:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.080 05:12:28 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:39.080 05:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.080 05:12:28 -- common/autotest_common.sh@10 -- # set +x 00:13:39.080 05:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.080 05:12:28 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.080 05:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.080 05:12:28 -- common/autotest_common.sh@10 -- # set +x 00:13:39.080 05:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.080 05:12:28 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:13:39.080 05:12:28 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:13:39.339 Shutting down the fuzz application 00:13:39.339 05:12:28 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:13:39.599 Shutting down the fuzz application 00:13:39.599 05:12:29 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.599 05:12:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.599 05:12:29 -- common/autotest_common.sh@10 -- # set +x 00:13:39.599 05:12:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.599 05:12:29 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:39.599 05:12:29 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:13:39.599 05:12:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:39.599 05:12:29 -- nvmf/common.sh@116 -- # sync 00:13:39.599 05:12:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:39.599 05:12:29 -- nvmf/common.sh@119 -- # set +e 00:13:39.599 05:12:29 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:13:39.599 05:12:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:39.599 rmmod nvme_tcp 00:13:39.599 rmmod nvme_fabrics 00:13:39.599 rmmod nvme_keyring 00:13:39.599 05:12:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:39.599 05:12:29 -- nvmf/common.sh@123 -- # set -e 00:13:39.599 05:12:29 -- nvmf/common.sh@124 -- # return 0 00:13:39.599 05:12:29 -- nvmf/common.sh@477 -- # '[' -n 78389 ']' 00:13:39.599 05:12:29 -- nvmf/common.sh@478 -- # killprocess 78389 00:13:39.599 05:12:29 -- common/autotest_common.sh@936 -- # '[' -z 78389 ']' 00:13:39.599 05:12:29 -- common/autotest_common.sh@940 -- # kill -0 78389 00:13:39.599 05:12:29 -- common/autotest_common.sh@941 -- # uname 00:13:39.599 05:12:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:39.599 05:12:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78389 00:13:39.857 05:12:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:39.857 killing process with pid 78389 00:13:39.857 05:12:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:39.857 05:12:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78389' 00:13:39.857 05:12:29 -- common/autotest_common.sh@955 -- # kill 78389 00:13:39.857 05:12:29 -- common/autotest_common.sh@960 -- # wait 78389 00:13:39.857 05:12:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:39.857 05:12:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:39.857 05:12:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:39.857 05:12:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:39.857 05:12:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:39.857 05:12:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.857 05:12:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.857 05:12:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.857 05:12:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:39.857 05:12:29 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:13:39.857 00:13:39.857 real 0m2.685s 00:13:39.857 user 0m2.874s 00:13:39.857 sys 0m0.604s 00:13:39.857 05:12:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:39.857 05:12:29 -- common/autotest_common.sh@10 -- # set +x 00:13:39.857 ************************************ 00:13:39.857 END TEST nvmf_fuzz 00:13:39.857 ************************************ 00:13:39.857 05:12:29 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:13:39.857 05:12:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:39.857 05:12:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:39.857 05:12:29 -- common/autotest_common.sh@10 -- # set +x 00:13:39.857 ************************************ 00:13:39.857 START TEST nvmf_multiconnection 00:13:39.857 ************************************ 00:13:39.857 05:12:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:13:40.116 * Looking for test storage... 
00:13:40.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:40.116 05:12:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:40.116 05:12:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:40.116 05:12:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:40.116 05:12:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:40.116 05:12:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:40.116 05:12:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:40.116 05:12:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:40.116 05:12:29 -- scripts/common.sh@335 -- # IFS=.-: 00:13:40.116 05:12:29 -- scripts/common.sh@335 -- # read -ra ver1 00:13:40.116 05:12:29 -- scripts/common.sh@336 -- # IFS=.-: 00:13:40.116 05:12:29 -- scripts/common.sh@336 -- # read -ra ver2 00:13:40.116 05:12:29 -- scripts/common.sh@337 -- # local 'op=<' 00:13:40.116 05:12:29 -- scripts/common.sh@339 -- # ver1_l=2 00:13:40.116 05:12:29 -- scripts/common.sh@340 -- # ver2_l=1 00:13:40.116 05:12:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:40.116 05:12:29 -- scripts/common.sh@343 -- # case "$op" in 00:13:40.116 05:12:29 -- scripts/common.sh@344 -- # : 1 00:13:40.116 05:12:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:40.116 05:12:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:40.116 05:12:29 -- scripts/common.sh@364 -- # decimal 1 00:13:40.116 05:12:29 -- scripts/common.sh@352 -- # local d=1 00:13:40.116 05:12:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:40.116 05:12:29 -- scripts/common.sh@354 -- # echo 1 00:13:40.116 05:12:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:40.116 05:12:29 -- scripts/common.sh@365 -- # decimal 2 00:13:40.116 05:12:29 -- scripts/common.sh@352 -- # local d=2 00:13:40.116 05:12:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:40.116 05:12:29 -- scripts/common.sh@354 -- # echo 2 00:13:40.116 05:12:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:40.116 05:12:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:40.116 05:12:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:40.116 05:12:29 -- scripts/common.sh@367 -- # return 0 00:13:40.116 05:12:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:40.116 05:12:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:40.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.116 --rc genhtml_branch_coverage=1 00:13:40.116 --rc genhtml_function_coverage=1 00:13:40.116 --rc genhtml_legend=1 00:13:40.116 --rc geninfo_all_blocks=1 00:13:40.116 --rc geninfo_unexecuted_blocks=1 00:13:40.116 00:13:40.116 ' 00:13:40.116 05:12:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:40.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.116 --rc genhtml_branch_coverage=1 00:13:40.116 --rc genhtml_function_coverage=1 00:13:40.116 --rc genhtml_legend=1 00:13:40.116 --rc geninfo_all_blocks=1 00:13:40.116 --rc geninfo_unexecuted_blocks=1 00:13:40.116 00:13:40.116 ' 00:13:40.116 05:12:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:40.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.116 --rc genhtml_branch_coverage=1 00:13:40.116 --rc genhtml_function_coverage=1 00:13:40.116 --rc genhtml_legend=1 00:13:40.116 --rc geninfo_all_blocks=1 00:13:40.116 --rc geninfo_unexecuted_blocks=1 00:13:40.116 00:13:40.116 ' 00:13:40.116 
05:12:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:40.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.116 --rc genhtml_branch_coverage=1 00:13:40.116 --rc genhtml_function_coverage=1 00:13:40.116 --rc genhtml_legend=1 00:13:40.116 --rc geninfo_all_blocks=1 00:13:40.116 --rc geninfo_unexecuted_blocks=1 00:13:40.116 00:13:40.116 ' 00:13:40.116 05:12:29 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:40.116 05:12:29 -- nvmf/common.sh@7 -- # uname -s 00:13:40.116 05:12:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.116 05:12:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.116 05:12:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.116 05:12:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.116 05:12:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.116 05:12:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.116 05:12:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.116 05:12:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.117 05:12:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.117 05:12:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.117 05:12:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:13:40.117 05:12:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:13:40.117 05:12:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.117 05:12:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.117 05:12:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:40.117 05:12:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:40.117 05:12:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.117 05:12:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.117 05:12:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.117 05:12:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.117 05:12:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.117 05:12:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.117 05:12:29 -- paths/export.sh@5 -- # export PATH 00:13:40.117 05:12:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.117 05:12:29 -- nvmf/common.sh@46 -- # : 0 00:13:40.117 05:12:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:40.117 05:12:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:40.117 05:12:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:40.117 05:12:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.117 05:12:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.117 05:12:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:40.117 05:12:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:40.117 05:12:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:40.117 05:12:29 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:40.117 05:12:29 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:40.117 05:12:29 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:13:40.117 05:12:29 -- target/multiconnection.sh@16 -- # nvmftestinit 00:13:40.117 05:12:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:40.117 05:12:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.117 05:12:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:40.117 05:12:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:40.117 05:12:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:40.117 05:12:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.117 05:12:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:40.117 05:12:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.117 05:12:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:40.117 05:12:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:40.117 05:12:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:40.117 05:12:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:40.117 05:12:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:40.117 05:12:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:40.117 05:12:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.117 05:12:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.117 05:12:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:40.117 05:12:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:40.117 05:12:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:40.117 05:12:29 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:40.117 05:12:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:40.117 05:12:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.117 05:12:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:40.117 05:12:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:40.117 05:12:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:40.117 05:12:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:40.117 05:12:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:40.117 05:12:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:40.117 Cannot find device "nvmf_tgt_br" 00:13:40.117 05:12:29 -- nvmf/common.sh@154 -- # true 00:13:40.117 05:12:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:40.117 Cannot find device "nvmf_tgt_br2" 00:13:40.117 05:12:29 -- nvmf/common.sh@155 -- # true 00:13:40.117 05:12:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:40.117 05:12:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:40.117 Cannot find device "nvmf_tgt_br" 00:13:40.117 05:12:29 -- nvmf/common.sh@157 -- # true 00:13:40.117 05:12:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:40.117 Cannot find device "nvmf_tgt_br2" 00:13:40.117 05:12:29 -- nvmf/common.sh@158 -- # true 00:13:40.117 05:12:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:40.379 05:12:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:40.379 05:12:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:40.379 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:40.379 05:12:29 -- nvmf/common.sh@161 -- # true 00:13:40.379 05:12:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:40.379 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:40.379 05:12:29 -- nvmf/common.sh@162 -- # true 00:13:40.379 05:12:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:40.379 05:12:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:40.379 05:12:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:40.379 05:12:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:40.379 05:12:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:40.379 05:12:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:40.379 05:12:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:40.379 05:12:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:40.379 05:12:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:40.379 05:12:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:40.379 05:12:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:40.379 05:12:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:40.379 05:12:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:40.379 05:12:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:40.379 05:12:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:13:40.379 05:12:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:40.379 05:12:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:40.379 05:12:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:40.379 05:12:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:40.379 05:12:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:40.379 05:12:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:40.637 05:12:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:40.637 05:12:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:40.637 05:12:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:40.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:40.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:13:40.637 00:13:40.637 --- 10.0.0.2 ping statistics --- 00:13:40.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.637 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:40.637 05:12:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:40.637 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:40.637 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:13:40.637 00:13:40.637 --- 10.0.0.3 ping statistics --- 00:13:40.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.637 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:13:40.637 05:12:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:40.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:40.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:13:40.637 00:13:40.637 --- 10.0.0.1 ping statistics --- 00:13:40.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.637 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:13:40.637 05:12:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.637 05:12:30 -- nvmf/common.sh@421 -- # return 0 00:13:40.637 05:12:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:40.637 05:12:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.637 05:12:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:40.637 05:12:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:40.637 05:12:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.637 05:12:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:40.637 05:12:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:40.637 05:12:30 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:13:40.637 05:12:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:40.637 05:12:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:40.637 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:40.637 05:12:30 -- nvmf/common.sh@469 -- # nvmfpid=78583 00:13:40.637 05:12:30 -- nvmf/common.sh@470 -- # waitforlisten 78583 00:13:40.637 05:12:30 -- common/autotest_common.sh@829 -- # '[' -z 78583 ']' 00:13:40.637 05:12:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.637 05:12:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:40.637 05:12:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:40.637 05:12:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.637 05:12:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:40.637 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:40.637 [2024-12-08 05:12:30.276859] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:40.637 [2024-12-08 05:12:30.276979] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.637 [2024-12-08 05:12:30.419073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.895 [2024-12-08 05:12:30.462427] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:40.895 [2024-12-08 05:12:30.462643] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.895 [2024-12-08 05:12:30.462691] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.895 [2024-12-08 05:12:30.462710] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.895 [2024-12-08 05:12:30.462804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.895 [2024-12-08 05:12:30.463439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.895 [2024-12-08 05:12:30.463545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.895 [2024-12-08 05:12:30.463560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.895 05:12:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:40.895 05:12:30 -- common/autotest_common.sh@862 -- # return 0 00:13:40.895 05:12:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:40.895 05:12:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:40.895 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:40.895 05:12:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.895 05:12:30 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:40.895 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.895 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:40.895 [2024-12-08 05:12:30.590562] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.895 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.895 05:12:30 -- target/multiconnection.sh@21 -- # seq 1 11 00:13:40.895 05:12:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:40.895 05:12:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:40.896 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.896 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:40.896 Malloc1 00:13:40.896 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.896 05:12:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:13:40.896 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.896 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:40.896 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.896 05:12:30 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:40.896 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.896 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:40.896 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.896 05:12:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.896 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.896 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:40.896 [2024-12-08 05:12:30.653927] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.896 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.896 05:12:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:40.896 05:12:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:13:40.896 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.896 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:40.896 Malloc2 00:13:40.896 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.896 05:12:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:40.896 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.896 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.153 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.153 05:12:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:13:41.153 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.153 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.153 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.153 05:12:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:41.153 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 05:12:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:41.154 05:12:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:13:41.154 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 Malloc3 00:13:41.154 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 05:12:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:13:41.154 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 05:12:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:13:41.154 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 05:12:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:41.154 
05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 05:12:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:41.154 05:12:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:13:41.154 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 Malloc4 00:13:41.154 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 05:12:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:13:41.154 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 05:12:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:13:41.154 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 05:12:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:41.154 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 05:12:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:41.154 05:12:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:13:41.154 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 Malloc5 00:13:41.154 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 05:12:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:13:41.154 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 05:12:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:13:41.154 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 05:12:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:13:41.154 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 05:12:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:41.154 05:12:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:13:41.154 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 Malloc6 00:13:41.154 05:12:30 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 05:12:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:13:41.154 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 05:12:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:13:41.154 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 05:12:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:13:41.154 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 05:12:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:41.154 05:12:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:13:41.154 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 Malloc7 00:13:41.154 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 05:12:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:13:41.154 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 05:12:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:13:41.154 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 05:12:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:13:41.154 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 05:12:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:41.154 05:12:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:13:41.154 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.444 Malloc8 00:13:41.444 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.444 05:12:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:13:41.444 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.444 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.444 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.444 05:12:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:13:41.444 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.444 05:12:30 
-- common/autotest_common.sh@10 -- # set +x 00:13:41.444 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.444 05:12:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:13:41.444 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.444 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.444 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.444 05:12:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:41.444 05:12:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:13:41.444 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.444 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.444 Malloc9 00:13:41.444 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.444 05:12:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:13:41.444 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.444 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.444 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.444 05:12:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:13:41.444 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.444 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.444 05:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.444 05:12:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:13:41.444 05:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.444 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:13:41.444 05:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.444 05:12:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:41.444 05:12:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:13:41.444 05:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.444 05:12:31 -- common/autotest_common.sh@10 -- # set +x 00:13:41.444 Malloc10 00:13:41.444 05:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.444 05:12:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:13:41.444 05:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.444 05:12:31 -- common/autotest_common.sh@10 -- # set +x 00:13:41.444 05:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.444 05:12:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:13:41.444 05:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.444 05:12:31 -- common/autotest_common.sh@10 -- # set +x 00:13:41.444 05:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.444 05:12:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:13:41.444 05:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.444 05:12:31 -- common/autotest_common.sh@10 -- # set +x 00:13:41.444 05:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.444 05:12:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:41.444 05:12:31 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:13:41.444 05:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.444 05:12:31 -- common/autotest_common.sh@10 -- # set +x 00:13:41.444 Malloc11 00:13:41.444 05:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.444 05:12:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:13:41.444 05:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.444 05:12:31 -- common/autotest_common.sh@10 -- # set +x 00:13:41.444 05:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.444 05:12:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:13:41.444 05:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.444 05:12:31 -- common/autotest_common.sh@10 -- # set +x 00:13:41.444 05:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.444 05:12:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:13:41.444 05:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.445 05:12:31 -- common/autotest_common.sh@10 -- # set +x 00:13:41.445 05:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.445 05:12:31 -- target/multiconnection.sh@28 -- # seq 1 11 00:13:41.445 05:12:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:41.445 05:12:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 --hostid=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:41.445 05:12:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:13:41.445 05:12:31 -- common/autotest_common.sh@1187 -- # local i=0 00:13:41.445 05:12:31 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:41.445 05:12:31 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:41.445 05:12:31 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:43.977 05:12:33 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:43.978 05:12:33 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:43.978 05:12:33 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:13:43.978 05:12:33 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:43.978 05:12:33 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:43.978 05:12:33 -- common/autotest_common.sh@1197 -- # return 0 00:13:43.978 05:12:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:43.978 05:12:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 --hostid=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:13:43.978 05:12:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:13:43.978 05:12:33 -- common/autotest_common.sh@1187 -- # local i=0 00:13:43.978 05:12:33 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:43.978 05:12:33 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:43.978 05:12:33 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:45.880 05:12:35 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:45.880 05:12:35 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:13:45.880 05:12:35 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:13:45.880 05:12:35 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:45.880 05:12:35 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:45.880 05:12:35 -- common/autotest_common.sh@1197 -- # return 0 00:13:45.880 05:12:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:45.880 05:12:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 --hostid=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:13:45.880 05:12:35 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:13:45.880 05:12:35 -- common/autotest_common.sh@1187 -- # local i=0 00:13:45.880 05:12:35 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:45.880 05:12:35 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:45.880 05:12:35 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:47.876 05:12:37 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:47.876 05:12:37 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:47.876 05:12:37 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:13:47.876 05:12:37 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:47.876 05:12:37 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:47.876 05:12:37 -- common/autotest_common.sh@1197 -- # return 0 00:13:47.876 05:12:37 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:47.876 05:12:37 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 --hostid=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:13:47.876 05:12:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:13:47.876 05:12:37 -- common/autotest_common.sh@1187 -- # local i=0 00:13:47.876 05:12:37 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:47.876 05:12:37 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:47.876 05:12:37 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:50.405 05:12:39 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:50.405 05:12:39 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:13:50.405 05:12:39 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:50.405 05:12:39 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:50.405 05:12:39 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:50.405 05:12:39 -- common/autotest_common.sh@1197 -- # return 0 00:13:50.405 05:12:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.405 05:12:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 --hostid=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:13:50.405 05:12:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:13:50.405 05:12:39 -- common/autotest_common.sh@1187 -- # local i=0 00:13:50.405 05:12:39 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:50.405 05:12:39 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:50.405 05:12:39 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:52.308 05:12:41 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:52.308 05:12:41 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:52.308 05:12:41 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:13:52.308 05:12:41 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:52.308 05:12:41 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:52.308 05:12:41 -- common/autotest_common.sh@1197 -- # return 0 00:13:52.308 05:12:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:52.308 05:12:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 --hostid=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:13:52.308 05:12:41 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:13:52.308 05:12:41 -- common/autotest_common.sh@1187 -- # local i=0 00:13:52.308 05:12:41 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:52.308 05:12:41 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:52.308 05:12:41 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:54.295 05:12:43 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:54.295 05:12:43 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:54.295 05:12:43 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:13:54.295 05:12:43 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:54.295 05:12:43 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:54.295 05:12:43 -- common/autotest_common.sh@1197 -- # return 0 00:13:54.295 05:12:43 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:54.295 05:12:43 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 --hostid=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:13:54.552 05:12:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:13:54.552 05:12:44 -- common/autotest_common.sh@1187 -- # local i=0 00:13:54.552 05:12:44 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:54.552 05:12:44 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:54.552 05:12:44 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:56.455 05:12:46 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:56.455 05:12:46 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:56.455 05:12:46 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:13:56.455 05:12:46 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:56.455 05:12:46 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:56.455 05:12:46 -- common/autotest_common.sh@1197 -- # return 0 00:13:56.455 05:12:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:56.455 05:12:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 --hostid=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:13:56.712 05:12:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:13:56.712 05:12:46 -- common/autotest_common.sh@1187 -- # local i=0 00:13:56.712 05:12:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:56.712 05:12:46 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:56.712 05:12:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:58.607 05:12:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:58.607 05:12:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:58.607 05:12:48 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:13:58.607 05:12:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:58.607 05:12:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:58.607 05:12:48 -- common/autotest_common.sh@1197 -- # return 0 00:13:58.607 05:12:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:58.607 05:12:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 --hostid=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:13:58.866 05:12:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:13:58.866 05:12:48 -- common/autotest_common.sh@1187 -- # local i=0 00:13:58.866 05:12:48 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:58.866 05:12:48 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:58.866 05:12:48 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:00.767 05:12:50 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:00.767 05:12:50 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:14:00.767 05:12:50 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:00.767 05:12:50 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:00.767 05:12:50 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:00.767 05:12:50 -- common/autotest_common.sh@1197 -- # return 0 00:14:00.767 05:12:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:00.767 05:12:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 --hostid=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:14:01.025 05:12:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:14:01.025 05:12:50 -- common/autotest_common.sh@1187 -- # local i=0 00:14:01.025 05:12:50 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:01.025 05:12:50 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:01.025 05:12:50 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:02.940 05:12:52 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:02.940 05:12:52 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:14:02.940 05:12:52 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:02.940 05:12:52 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:02.940 05:12:52 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:02.940 05:12:52 -- common/autotest_common.sh@1197 -- # return 0 00:14:02.940 05:12:52 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:02.940 05:12:52 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 --hostid=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:14:03.198 05:12:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:14:03.198 05:12:52 -- common/autotest_common.sh@1187 -- # local i=0 
00:14:03.198 05:12:52 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:03.198 05:12:52 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:03.198 05:12:52 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:05.160 05:12:54 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:05.160 05:12:54 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:05.160 05:12:54 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:14:05.160 05:12:54 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:05.160 05:12:54 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:05.160 05:12:54 -- common/autotest_common.sh@1197 -- # return 0 00:14:05.160 05:12:54 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:14:05.160 [global] 00:14:05.160 thread=1 00:14:05.160 invalidate=1 00:14:05.160 rw=read 00:14:05.160 time_based=1 00:14:05.160 runtime=10 00:14:05.160 ioengine=libaio 00:14:05.160 direct=1 00:14:05.160 bs=262144 00:14:05.160 iodepth=64 00:14:05.160 norandommap=1 00:14:05.160 numjobs=1 00:14:05.160 00:14:05.160 [job0] 00:14:05.160 filename=/dev/nvme0n1 00:14:05.160 [job1] 00:14:05.160 filename=/dev/nvme10n1 00:14:05.160 [job2] 00:14:05.160 filename=/dev/nvme1n1 00:14:05.160 [job3] 00:14:05.160 filename=/dev/nvme2n1 00:14:05.160 [job4] 00:14:05.160 filename=/dev/nvme3n1 00:14:05.160 [job5] 00:14:05.160 filename=/dev/nvme4n1 00:14:05.160 [job6] 00:14:05.160 filename=/dev/nvme5n1 00:14:05.160 [job7] 00:14:05.160 filename=/dev/nvme6n1 00:14:05.160 [job8] 00:14:05.160 filename=/dev/nvme7n1 00:14:05.160 [job9] 00:14:05.160 filename=/dev/nvme8n1 00:14:05.160 [job10] 00:14:05.160 filename=/dev/nvme9n1 00:14:05.417 Could not set queue depth (nvme0n1) 00:14:05.417 Could not set queue depth (nvme10n1) 00:14:05.417 Could not set queue depth (nvme1n1) 00:14:05.417 Could not set queue depth (nvme2n1) 00:14:05.417 Could not set queue depth (nvme3n1) 00:14:05.417 Could not set queue depth (nvme4n1) 00:14:05.417 Could not set queue depth (nvme5n1) 00:14:05.417 Could not set queue depth (nvme6n1) 00:14:05.417 Could not set queue depth (nvme7n1) 00:14:05.417 Could not set queue depth (nvme8n1) 00:14:05.417 Could not set queue depth (nvme9n1) 00:14:05.417 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:05.417 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:05.417 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:05.417 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:05.417 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:05.417 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:05.417 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:05.417 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:05.417 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:05.417 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:14:05.417 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:05.417 fio-3.35 00:14:05.417 Starting 11 threads 00:14:17.669 00:14:17.669 job0: (groupid=0, jobs=1): err= 0: pid=79035: Sun Dec 8 05:13:05 2024 00:14:17.669 read: IOPS=331, BW=83.0MiB/s (87.0MB/s)(841MiB/10136msec) 00:14:17.669 slat (usec): min=17, max=186189, avg=3001.21, stdev=11242.93 00:14:17.669 clat (msec): min=22, max=392, avg=189.57, stdev=24.46 00:14:17.669 lat (msec): min=33, max=392, avg=192.57, stdev=26.45 00:14:17.669 clat percentiles (msec): 00:14:17.669 | 1.00th=[ 138], 5.00th=[ 167], 10.00th=[ 171], 20.00th=[ 174], 00:14:17.669 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 192], 00:14:17.669 | 70.00th=[ 197], 80.00th=[ 203], 90.00th=[ 218], 95.00th=[ 226], 00:14:17.669 | 99.00th=[ 271], 99.50th=[ 292], 99.90th=[ 326], 99.95th=[ 393], 00:14:17.669 | 99.99th=[ 393] 00:14:17.669 bw ( KiB/s): min=67719, max=94208, per=4.26%, avg=84529.15, stdev=6997.93, samples=20 00:14:17.669 iops : min= 264, max= 368, avg=329.85, stdev=27.36, samples=20 00:14:17.669 lat (msec) : 50=0.51%, 250=97.83%, 500=1.66% 00:14:17.669 cpu : usr=0.22%, sys=1.29%, ctx=817, majf=0, minf=4097 00:14:17.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:14:17.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:17.669 issued rwts: total=3364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.669 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:17.669 job1: (groupid=0, jobs=1): err= 0: pid=79036: Sun Dec 8 05:13:05 2024 00:14:17.669 read: IOPS=333, BW=83.4MiB/s (87.5MB/s)(846MiB/10138msec) 00:14:17.669 slat (usec): min=18, max=89592, avg=2953.13, stdev=8521.60 00:14:17.669 clat (msec): min=15, max=318, avg=188.68, stdev=24.34 00:14:17.669 lat (msec): min=15, max=341, avg=191.63, stdev=25.62 00:14:17.669 clat percentiles (msec): 00:14:17.669 | 1.00th=[ 124], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:14:17.669 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 192], 00:14:17.669 | 70.00th=[ 197], 80.00th=[ 203], 90.00th=[ 215], 95.00th=[ 224], 00:14:17.669 | 99.00th=[ 259], 99.50th=[ 300], 99.90th=[ 317], 99.95th=[ 317], 00:14:17.669 | 99.99th=[ 317] 00:14:17.669 bw ( KiB/s): min=68745, max=95232, per=4.28%, avg=84963.80, stdev=6686.82, samples=20 00:14:17.669 iops : min= 268, max= 372, avg=331.80, stdev=26.26, samples=20 00:14:17.669 lat (msec) : 20=0.03%, 100=0.92%, 250=97.87%, 500=1.18% 00:14:17.669 cpu : usr=0.22%, sys=1.25%, ctx=854, majf=0, minf=4097 00:14:17.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:14:17.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:17.669 issued rwts: total=3382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.669 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:17.669 job2: (groupid=0, jobs=1): err= 0: pid=79037: Sun Dec 8 05:13:05 2024 00:14:17.669 read: IOPS=333, BW=83.3MiB/s (87.4MB/s)(845MiB/10143msec) 00:14:17.669 slat (usec): min=17, max=108132, avg=2977.27, stdev=10604.99 00:14:17.669 clat (msec): min=38, max=310, avg=188.69, stdev=23.40 00:14:17.669 lat (msec): min=41, max=347, avg=191.66, stdev=25.40 00:14:17.669 clat percentiles (msec): 00:14:17.669 | 1.00th=[ 146], 5.00th=[ 167], 
10.00th=[ 171], 20.00th=[ 176], 00:14:17.669 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 192], 00:14:17.669 | 70.00th=[ 197], 80.00th=[ 203], 90.00th=[ 215], 95.00th=[ 224], 00:14:17.669 | 99.00th=[ 268], 99.50th=[ 275], 99.90th=[ 296], 99.95th=[ 296], 00:14:17.669 | 99.99th=[ 309] 00:14:17.669 bw ( KiB/s): min=69771, max=96256, per=4.28%, avg=84893.90, stdev=8531.63, samples=20 00:14:17.669 iops : min= 272, max= 376, avg=331.50, stdev=33.45, samples=20 00:14:17.669 lat (msec) : 50=0.74%, 100=0.12%, 250=97.43%, 500=1.72% 00:14:17.669 cpu : usr=0.18%, sys=1.38%, ctx=851, majf=0, minf=4097 00:14:17.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:14:17.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:17.669 issued rwts: total=3380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.669 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:17.669 job3: (groupid=0, jobs=1): err= 0: pid=79038: Sun Dec 8 05:13:05 2024 00:14:17.669 read: IOPS=1107, BW=277MiB/s (290MB/s)(2772MiB/10013msec) 00:14:17.669 slat (usec): min=14, max=14839, avg=864.17, stdev=1963.65 00:14:17.669 clat (msec): min=10, max=200, avg=56.85, stdev=16.13 00:14:17.669 lat (msec): min=10, max=207, avg=57.71, stdev=16.37 00:14:17.669 clat percentiles (msec): 00:14:17.669 | 1.00th=[ 27], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 36], 00:14:17.669 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 63], 00:14:17.669 | 70.00th=[ 65], 80.00th=[ 68], 90.00th=[ 73], 95.00th=[ 79], 00:14:17.669 | 99.00th=[ 90], 99.50th=[ 96], 99.90th=[ 197], 99.95th=[ 199], 00:14:17.669 | 99.99th=[ 201] 00:14:17.670 bw ( KiB/s): min=212055, max=515584, per=14.23%, avg=282400.80, stdev=80938.00, samples=20 00:14:17.670 iops : min= 828, max= 2014, avg=1102.85, stdev=316.12, samples=20 00:14:17.670 lat (msec) : 20=0.32%, 50=22.63%, 100=76.78%, 250=0.26% 00:14:17.670 cpu : usr=0.64%, sys=4.02%, ctx=2548, majf=0, minf=4097 00:14:17.670 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:17.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:17.670 issued rwts: total=11087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.670 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:17.670 job4: (groupid=0, jobs=1): err= 0: pid=79039: Sun Dec 8 05:13:05 2024 00:14:17.670 read: IOPS=334, BW=83.6MiB/s (87.6MB/s)(847MiB/10137msec) 00:14:17.670 slat (usec): min=16, max=114960, avg=2946.61, stdev=10344.76 00:14:17.670 clat (msec): min=48, max=323, avg=188.28, stdev=25.68 00:14:17.670 lat (msec): min=48, max=323, avg=191.23, stdev=27.58 00:14:17.670 clat percentiles (msec): 00:14:17.670 | 1.00th=[ 75], 5.00th=[ 167], 10.00th=[ 171], 20.00th=[ 174], 00:14:17.670 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 192], 00:14:17.670 | 70.00th=[ 197], 80.00th=[ 203], 90.00th=[ 213], 95.00th=[ 224], 00:14:17.670 | 99.00th=[ 279], 99.50th=[ 305], 99.90th=[ 317], 99.95th=[ 326], 00:14:17.670 | 99.99th=[ 326] 00:14:17.670 bw ( KiB/s): min=71823, max=96448, per=4.29%, avg=85127.50, stdev=7039.92, samples=20 00:14:17.670 iops : min= 280, max= 376, avg=332.40, stdev=27.54, samples=20 00:14:17.670 lat (msec) : 50=0.15%, 100=1.68%, 250=96.55%, 500=1.62% 00:14:17.670 cpu : usr=0.17%, sys=1.55%, ctx=816, majf=0, minf=4097 00:14:17.670 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:14:17.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:17.670 issued rwts: total=3388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.670 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:17.670 job5: (groupid=0, jobs=1): err= 0: pid=79040: Sun Dec 8 05:13:05 2024 00:14:17.670 read: IOPS=332, BW=83.2MiB/s (87.3MB/s)(843MiB/10134msec) 00:14:17.670 slat (usec): min=17, max=110126, avg=2981.30, stdev=9401.77 00:14:17.670 clat (msec): min=61, max=333, avg=189.01, stdev=21.67 00:14:17.670 lat (msec): min=68, max=333, avg=191.99, stdev=23.28 00:14:17.670 clat percentiles (msec): 00:14:17.670 | 1.00th=[ 86], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 176], 00:14:17.670 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:14:17.670 | 70.00th=[ 197], 80.00th=[ 203], 90.00th=[ 215], 95.00th=[ 224], 00:14:17.670 | 99.00th=[ 245], 99.50th=[ 259], 99.90th=[ 288], 99.95th=[ 300], 00:14:17.670 | 99.99th=[ 334] 00:14:17.670 bw ( KiB/s): min=67206, max=95744, per=4.27%, avg=84776.10, stdev=8367.74, samples=20 00:14:17.670 iops : min= 262, max= 374, avg=330.80, stdev=32.72, samples=20 00:14:17.670 lat (msec) : 100=1.01%, 250=98.28%, 500=0.71% 00:14:17.670 cpu : usr=0.18%, sys=1.30%, ctx=833, majf=0, minf=4097 00:14:17.670 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:14:17.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:17.670 issued rwts: total=3373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.670 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:17.670 job6: (groupid=0, jobs=1): err= 0: pid=79041: Sun Dec 8 05:13:05 2024 00:14:17.670 read: IOPS=1211, BW=303MiB/s (318MB/s)(3036MiB/10019msec) 00:14:17.670 slat (usec): min=17, max=14730, avg=816.77, stdev=1802.11 00:14:17.670 clat (msec): min=3, max=117, avg=51.92, stdev=15.85 00:14:17.670 lat (msec): min=3, max=117, avg=52.74, stdev=16.07 00:14:17.670 clat percentiles (msec): 00:14:17.670 | 1.00th=[ 27], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 33], 00:14:17.670 | 30.00th=[ 36], 40.00th=[ 53], 50.00th=[ 57], 60.00th=[ 60], 00:14:17.670 | 70.00th=[ 63], 80.00th=[ 66], 90.00th=[ 70], 95.00th=[ 74], 00:14:17.670 | 99.00th=[ 85], 99.50th=[ 92], 99.90th=[ 107], 99.95th=[ 110], 00:14:17.670 | 99.99th=[ 118] 00:14:17.670 bw ( KiB/s): min=227385, max=506368, per=15.58%, avg=309148.50, stdev=97046.74, samples=20 00:14:17.670 iops : min= 888, max= 1978, avg=1207.55, stdev=379.11, samples=20 00:14:17.670 lat (msec) : 4=0.01%, 10=0.03%, 20=0.16%, 50=36.38%, 100=63.09% 00:14:17.670 lat (msec) : 250=0.32% 00:14:17.670 cpu : usr=0.52%, sys=4.31%, ctx=2739, majf=0, minf=4097 00:14:17.670 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:14:17.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:17.670 issued rwts: total=12143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.670 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:17.670 job7: (groupid=0, jobs=1): err= 0: pid=79044: Sun Dec 8 05:13:05 2024 00:14:17.670 read: IOPS=1024, BW=256MiB/s (268MB/s)(2568MiB/10028msec) 00:14:17.670 slat (usec): min=17, max=17620, avg=964.13, stdev=2048.90 00:14:17.670 clat 
(msec): min=9, max=117, avg=61.40, stdev= 9.49 00:14:17.670 lat (msec): min=9, max=120, avg=62.36, stdev= 9.54 00:14:17.670 clat percentiles (msec): 00:14:17.670 | 1.00th=[ 33], 5.00th=[ 50], 10.00th=[ 53], 20.00th=[ 56], 00:14:17.670 | 30.00th=[ 58], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 63], 00:14:17.670 | 70.00th=[ 65], 80.00th=[ 68], 90.00th=[ 72], 95.00th=[ 77], 00:14:17.670 | 99.00th=[ 85], 99.50th=[ 93], 99.90th=[ 116], 99.95th=[ 117], 00:14:17.670 | 99.99th=[ 118] 00:14:17.670 bw ( KiB/s): min=229888, max=278528, per=13.17%, avg=261323.05, stdev=12918.57, samples=20 00:14:17.670 iops : min= 898, max= 1088, avg=1020.65, stdev=50.53, samples=20 00:14:17.670 lat (msec) : 10=0.01%, 20=0.60%, 50=5.38%, 100=93.61%, 250=0.39% 00:14:17.670 cpu : usr=0.55%, sys=3.85%, ctx=2357, majf=0, minf=4097 00:14:17.670 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:17.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:17.670 issued rwts: total=10271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.670 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:17.670 job8: (groupid=0, jobs=1): err= 0: pid=79047: Sun Dec 8 05:13:05 2024 00:14:17.670 read: IOPS=842, BW=211MiB/s (221MB/s)(2136MiB/10136msec) 00:14:17.670 slat (usec): min=16, max=162252, avg=1159.22, stdev=4872.56 00:14:17.670 clat (msec): min=20, max=364, avg=74.67, stdev=40.49 00:14:17.670 lat (msec): min=20, max=364, avg=75.82, stdev=41.20 00:14:17.670 clat percentiles (msec): 00:14:17.670 | 1.00th=[ 42], 5.00th=[ 52], 10.00th=[ 55], 20.00th=[ 58], 00:14:17.670 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 64], 60.00th=[ 66], 00:14:17.670 | 70.00th=[ 68], 80.00th=[ 73], 90.00th=[ 85], 95.00th=[ 197], 00:14:17.670 | 99.00th=[ 228], 99.50th=[ 275], 99.90th=[ 317], 99.95th=[ 317], 00:14:17.670 | 99.99th=[ 363] 00:14:17.670 bw ( KiB/s): min=66048, max=275494, per=10.94%, avg=217130.80, stdev=72646.03, samples=20 00:14:17.670 iops : min= 258, max= 1076, avg=848.00, stdev=283.86, samples=20 00:14:17.670 lat (msec) : 50=3.83%, 100=88.53%, 250=6.94%, 500=0.70% 00:14:17.670 cpu : usr=0.41%, sys=3.16%, ctx=1902, majf=0, minf=4097 00:14:17.670 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:14:17.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:17.670 issued rwts: total=8544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.670 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:17.670 job9: (groupid=0, jobs=1): err= 0: pid=79048: Sun Dec 8 05:13:05 2024 00:14:17.670 read: IOPS=332, BW=83.1MiB/s (87.1MB/s)(843MiB/10142msec) 00:14:17.670 slat (usec): min=17, max=114578, avg=2911.18, stdev=9530.42 00:14:17.670 clat (msec): min=26, max=313, avg=189.38, stdev=22.36 00:14:17.670 lat (msec): min=27, max=313, avg=192.29, stdev=23.93 00:14:17.670 clat percentiles (msec): 00:14:17.670 | 1.00th=[ 126], 5.00th=[ 167], 10.00th=[ 171], 20.00th=[ 176], 00:14:17.670 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 192], 00:14:17.670 | 70.00th=[ 197], 80.00th=[ 203], 90.00th=[ 218], 95.00th=[ 226], 00:14:17.670 | 99.00th=[ 268], 99.50th=[ 288], 99.90th=[ 313], 99.95th=[ 313], 00:14:17.670 | 99.99th=[ 313] 00:14:17.670 bw ( KiB/s): min=69632, max=92672, per=4.27%, avg=84659.50, stdev=6226.48, samples=20 00:14:17.670 iops : min= 272, max= 362, avg=330.60, 
stdev=24.37, samples=20 00:14:17.670 lat (msec) : 50=0.03%, 100=0.18%, 250=98.10%, 500=1.69% 00:14:17.670 cpu : usr=0.21%, sys=1.28%, ctx=855, majf=0, minf=4097 00:14:17.670 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:14:17.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:17.670 issued rwts: total=3371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.670 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:17.670 job10: (groupid=0, jobs=1): err= 0: pid=79049: Sun Dec 8 05:13:05 2024 00:14:17.670 read: IOPS=1629, BW=407MiB/s (427MB/s)(4083MiB/10021msec) 00:14:17.670 slat (usec): min=17, max=16543, avg=605.47, stdev=1317.57 00:14:17.670 clat (usec): min=19692, max=89960, avg=38605.13, stdev=11831.08 00:14:17.670 lat (usec): min=22226, max=96596, avg=39210.60, stdev=11989.35 00:14:17.670 clat percentiles (usec): 00:14:17.670 | 1.00th=[27657], 5.00th=[29492], 10.00th=[30278], 20.00th=[31327], 00:14:17.670 | 30.00th=[31851], 40.00th=[32637], 50.00th=[33424], 60.00th=[34341], 00:14:17.670 | 70.00th=[36963], 80.00th=[44827], 90.00th=[60031], 95.00th=[65799], 00:14:17.670 | 99.00th=[74974], 99.50th=[79168], 99.90th=[84411], 99.95th=[86508], 00:14:17.670 | 99.99th=[89654] 00:14:17.670 bw ( KiB/s): min=257536, max=515072, per=20.99%, avg=416493.00, stdev=104021.31, samples=20 00:14:17.670 iops : min= 1006, max= 2012, avg=1626.85, stdev=406.32, samples=20 00:14:17.670 lat (msec) : 20=0.01%, 50=82.21%, 100=17.79% 00:14:17.670 cpu : usr=0.67%, sys=5.71%, ctx=3586, majf=0, minf=4097 00:14:17.670 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:14:17.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:17.670 issued rwts: total=16333,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.670 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:17.670 00:14:17.670 Run status group 0 (all jobs): 00:14:17.671 READ: bw=1938MiB/s (2032MB/s), 83.0MiB/s-407MiB/s (87.0MB/s-427MB/s), io=19.2GiB (20.6GB), run=10013-10143msec 00:14:17.671 00:14:17.671 Disk stats (read/write): 00:14:17.671 nvme0n1: ios=6607/0, merge=0/0, ticks=1220180/0, in_queue=1220180, util=97.65% 00:14:17.671 nvme10n1: ios=6653/0, merge=0/0, ticks=1224172/0, in_queue=1224172, util=97.95% 00:14:17.671 nvme1n1: ios=6634/0, merge=0/0, ticks=1224465/0, in_queue=1224465, util=98.03% 00:14:17.671 nvme2n1: ios=22053/0, merge=0/0, ticks=1236328/0, in_queue=1236328, util=98.10% 00:14:17.671 nvme3n1: ios=6668/0, merge=0/0, ticks=1226627/0, in_queue=1226627, util=98.19% 00:14:17.671 nvme4n1: ios=6622/0, merge=0/0, ticks=1222935/0, in_queue=1222935, util=98.34% 00:14:17.671 nvme5n1: ios=24159/0, merge=0/0, ticks=1233898/0, in_queue=1233898, util=98.47% 00:14:17.671 nvme6n1: ios=20437/0, merge=0/0, ticks=1234190/0, in_queue=1234190, util=98.65% 00:14:17.671 nvme7n1: ios=16973/0, merge=0/0, ticks=1225422/0, in_queue=1225422, util=98.87% 00:14:17.671 nvme8n1: ios=6615/0, merge=0/0, ticks=1221996/0, in_queue=1221996, util=99.04% 00:14:17.671 nvme9n1: ios=32562/0, merge=0/0, ticks=1239263/0, in_queue=1239263, util=99.08% 00:14:17.671 05:13:05 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:14:17.671 [global] 00:14:17.671 thread=1 00:14:17.671 invalidate=1 00:14:17.671 rw=randwrite 
00:14:17.671 time_based=1 00:14:17.671 runtime=10 00:14:17.671 ioengine=libaio 00:14:17.671 direct=1 00:14:17.671 bs=262144 00:14:17.671 iodepth=64 00:14:17.671 norandommap=1 00:14:17.671 numjobs=1 00:14:17.671 00:14:17.671 [job0] 00:14:17.671 filename=/dev/nvme0n1 00:14:17.671 [job1] 00:14:17.671 filename=/dev/nvme10n1 00:14:17.671 [job2] 00:14:17.671 filename=/dev/nvme1n1 00:14:17.671 [job3] 00:14:17.671 filename=/dev/nvme2n1 00:14:17.671 [job4] 00:14:17.671 filename=/dev/nvme3n1 00:14:17.671 [job5] 00:14:17.671 filename=/dev/nvme4n1 00:14:17.671 [job6] 00:14:17.671 filename=/dev/nvme5n1 00:14:17.671 [job7] 00:14:17.671 filename=/dev/nvme6n1 00:14:17.671 [job8] 00:14:17.671 filename=/dev/nvme7n1 00:14:17.671 [job9] 00:14:17.671 filename=/dev/nvme8n1 00:14:17.671 [job10] 00:14:17.671 filename=/dev/nvme9n1 00:14:17.671 Could not set queue depth (nvme0n1) 00:14:17.671 Could not set queue depth (nvme10n1) 00:14:17.671 Could not set queue depth (nvme1n1) 00:14:17.671 Could not set queue depth (nvme2n1) 00:14:17.671 Could not set queue depth (nvme3n1) 00:14:17.671 Could not set queue depth (nvme4n1) 00:14:17.671 Could not set queue depth (nvme5n1) 00:14:17.671 Could not set queue depth (nvme6n1) 00:14:17.671 Could not set queue depth (nvme7n1) 00:14:17.671 Could not set queue depth (nvme8n1) 00:14:17.671 Could not set queue depth (nvme9n1) 00:14:17.671 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:17.671 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:17.671 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:17.671 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:17.671 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:17.671 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:17.671 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:17.671 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:17.671 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:17.671 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:17.671 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:17.671 fio-3.35 00:14:17.671 Starting 11 threads 00:14:27.637 00:14:27.637 job0: (groupid=0, jobs=1): err= 0: pid=79244: Sun Dec 8 05:13:16 2024 00:14:27.637 write: IOPS=686, BW=172MiB/s (180MB/s)(1732MiB/10085msec); 0 zone resets 00:14:27.637 slat (usec): min=15, max=34693, avg=1439.03, stdev=2503.12 00:14:27.637 clat (msec): min=36, max=223, avg=91.70, stdev=14.46 00:14:27.637 lat (msec): min=36, max=223, avg=93.14, stdev=14.45 00:14:27.637 clat percentiles (msec): 00:14:27.637 | 1.00th=[ 80], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 86], 00:14:27.637 | 30.00th=[ 87], 40.00th=[ 88], 50.00th=[ 89], 60.00th=[ 90], 00:14:27.637 | 70.00th=[ 92], 80.00th=[ 94], 90.00th=[ 105], 95.00th=[ 110], 00:14:27.637 | 99.00th=[ 169], 99.50th=[ 197], 99.90th=[ 222], 99.95th=[ 224], 
00:14:27.637 | 99.99th=[ 224] 00:14:27.637 bw ( KiB/s): min=110592, max=190464, per=12.51%, avg=175726.20, stdev=17938.70, samples=20 00:14:27.637 iops : min= 432, max= 744, avg=686.40, stdev=70.07, samples=20 00:14:27.637 lat (msec) : 50=0.17%, 100=86.85%, 250=12.98% 00:14:27.637 cpu : usr=1.17%, sys=1.88%, ctx=9866, majf=0, minf=1 00:14:27.637 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:14:27.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.637 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:27.637 issued rwts: total=0,6928,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.637 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.637 job1: (groupid=0, jobs=1): err= 0: pid=79245: Sun Dec 8 05:13:16 2024 00:14:27.637 write: IOPS=1100, BW=275MiB/s (289MB/s)(2767MiB/10053msec); 0 zone resets 00:14:27.637 slat (usec): min=17, max=23559, avg=898.46, stdev=1533.60 00:14:27.637 clat (msec): min=8, max=105, avg=57.21, stdev= 6.73 00:14:27.637 lat (msec): min=8, max=105, avg=58.11, stdev= 6.66 00:14:27.637 clat percentiles (msec): 00:14:27.637 | 1.00th=[ 50], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 54], 00:14:27.638 | 30.00th=[ 55], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 57], 00:14:27.638 | 70.00th=[ 58], 80.00th=[ 60], 90.00th=[ 65], 95.00th=[ 70], 00:14:27.638 | 99.00th=[ 84], 99.50th=[ 88], 99.90th=[ 95], 99.95th=[ 102], 00:14:27.638 | 99.99th=[ 106] 00:14:27.638 bw ( KiB/s): min=258048, max=301568, per=20.06%, avg=281728.00, stdev=14669.80, samples=20 00:14:27.638 iops : min= 1008, max= 1178, avg=1100.50, stdev=57.30, samples=20 00:14:27.638 lat (msec) : 10=0.03%, 20=0.11%, 50=1.33%, 100=98.48%, 250=0.05% 00:14:27.638 cpu : usr=1.88%, sys=3.13%, ctx=2827, majf=0, minf=1 00:14:27.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:27.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:27.638 issued rwts: total=0,11068,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.638 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.638 job2: (groupid=0, jobs=1): err= 0: pid=79257: Sun Dec 8 05:13:16 2024 00:14:27.638 write: IOPS=422, BW=106MiB/s (111MB/s)(1068MiB/10107msec); 0 zone resets 00:14:27.638 slat (usec): min=15, max=42379, avg=2337.25, stdev=4140.16 00:14:27.638 clat (msec): min=45, max=226, avg=149.10, stdev=25.18 00:14:27.638 lat (msec): min=45, max=226, avg=151.43, stdev=25.24 00:14:27.638 clat percentiles (msec): 00:14:27.638 | 1.00th=[ 112], 5.00th=[ 115], 10.00th=[ 121], 20.00th=[ 123], 00:14:27.638 | 30.00th=[ 131], 40.00th=[ 146], 50.00th=[ 150], 60.00th=[ 157], 00:14:27.638 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 186], 95.00th=[ 194], 00:14:27.638 | 99.00th=[ 215], 99.50th=[ 220], 99.90th=[ 226], 99.95th=[ 226], 00:14:27.638 | 99.99th=[ 226] 00:14:27.638 bw ( KiB/s): min=77824, max=137216, per=7.67%, avg=107699.20, stdev=16443.91, samples=20 00:14:27.638 iops : min= 304, max= 536, avg=420.70, stdev=64.23, samples=20 00:14:27.638 lat (msec) : 50=0.09%, 100=0.66%, 250=99.25% 00:14:27.638 cpu : usr=0.67%, sys=1.32%, ctx=4902, majf=0, minf=1 00:14:27.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:14:27.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:27.638 issued rwts: 
total=0,4270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.638 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.638 job3: (groupid=0, jobs=1): err= 0: pid=79258: Sun Dec 8 05:13:16 2024 00:14:27.638 write: IOPS=421, BW=105MiB/s (110MB/s)(1065MiB/10117msec); 0 zone resets 00:14:27.638 slat (usec): min=17, max=31389, avg=2343.15, stdev=4139.98 00:14:27.638 clat (msec): min=20, max=230, avg=149.55, stdev=26.40 00:14:27.638 lat (msec): min=21, max=231, avg=151.89, stdev=26.50 00:14:27.638 clat percentiles (msec): 00:14:27.638 | 1.00th=[ 84], 5.00th=[ 115], 10.00th=[ 121], 20.00th=[ 123], 00:14:27.638 | 30.00th=[ 130], 40.00th=[ 148], 50.00th=[ 153], 60.00th=[ 157], 00:14:27.638 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 186], 95.00th=[ 194], 00:14:27.638 | 99.00th=[ 215], 99.50th=[ 222], 99.90th=[ 226], 99.95th=[ 226], 00:14:27.638 | 99.99th=[ 232] 00:14:27.638 bw ( KiB/s): min=79872, max=137216, per=7.65%, avg=107468.80, stdev=15949.52, samples=20 00:14:27.638 iops : min= 312, max= 536, avg=419.80, stdev=62.30, samples=20 00:14:27.638 lat (msec) : 50=0.56%, 100=0.66%, 250=98.78% 00:14:27.638 cpu : usr=0.73%, sys=1.23%, ctx=5046, majf=0, minf=1 00:14:27.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:14:27.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:27.638 issued rwts: total=0,4261,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.638 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.638 job4: (groupid=0, jobs=1): err= 0: pid=79259: Sun Dec 8 05:13:16 2024 00:14:27.638 write: IOPS=360, BW=90.2MiB/s (94.6MB/s)(923MiB/10235msec); 0 zone resets 00:14:27.638 slat (usec): min=17, max=49323, avg=2709.27, stdev=4887.93 00:14:27.638 clat (msec): min=52, max=425, avg=174.63, stdev=37.23 00:14:27.638 lat (msec): min=52, max=425, avg=177.33, stdev=37.29 00:14:27.638 clat percentiles (msec): 00:14:27.638 | 1.00th=[ 138], 5.00th=[ 144], 10.00th=[ 148], 20.00th=[ 153], 00:14:27.638 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 174], 00:14:27.638 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 201], 95.00th=[ 243], 00:14:27.638 | 99.00th=[ 321], 99.50th=[ 368], 99.90th=[ 414], 99.95th=[ 426], 00:14:27.638 | 99.99th=[ 426] 00:14:27.638 bw ( KiB/s): min=69632, max=108544, per=6.61%, avg=92842.50, stdev=11929.36, samples=20 00:14:27.638 iops : min= 272, max= 424, avg=362.65, stdev=46.60, samples=20 00:14:27.638 lat (msec) : 100=0.65%, 250=94.69%, 500=4.66% 00:14:27.638 cpu : usr=0.59%, sys=1.08%, ctx=4098, majf=0, minf=1 00:14:27.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:14:27.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:27.638 issued rwts: total=0,3692,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.638 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.638 job5: (groupid=0, jobs=1): err= 0: pid=79260: Sun Dec 8 05:13:16 2024 00:14:27.638 write: IOPS=423, BW=106MiB/s (111MB/s)(1070MiB/10111msec); 0 zone resets 00:14:27.638 slat (usec): min=16, max=26594, avg=2333.30, stdev=4106.57 00:14:27.638 clat (msec): min=16, max=227, avg=148.87, stdev=26.53 00:14:27.638 lat (msec): min=16, max=227, avg=151.21, stdev=26.63 00:14:27.638 clat percentiles (msec): 00:14:27.638 | 1.00th=[ 80], 5.00th=[ 115], 10.00th=[ 121], 20.00th=[ 123], 00:14:27.638 | 30.00th=[ 
129], 40.00th=[ 146], 50.00th=[ 150], 60.00th=[ 157], 00:14:27.638 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 186], 95.00th=[ 194], 00:14:27.638 | 99.00th=[ 215], 99.50th=[ 220], 99.90th=[ 226], 99.95th=[ 226], 00:14:27.638 | 99.99th=[ 228] 00:14:27.638 bw ( KiB/s): min=79872, max=135168, per=7.68%, avg=107904.00, stdev=16169.44, samples=20 00:14:27.638 iops : min= 312, max= 528, avg=421.50, stdev=63.16, samples=20 00:14:27.638 lat (msec) : 20=0.09%, 50=0.47%, 100=0.65%, 250=98.78% 00:14:27.638 cpu : usr=0.81%, sys=1.19%, ctx=5386, majf=0, minf=1 00:14:27.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:14:27.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:27.638 issued rwts: total=0,4278,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.638 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.638 job6: (groupid=0, jobs=1): err= 0: pid=79261: Sun Dec 8 05:13:16 2024 00:14:27.638 write: IOPS=359, BW=89.9MiB/s (94.3MB/s)(921MiB/10242msec); 0 zone resets 00:14:27.638 slat (usec): min=16, max=68903, avg=2672.11, stdev=4917.48 00:14:27.638 clat (msec): min=43, max=430, avg=175.07, stdev=35.89 00:14:27.638 lat (msec): min=43, max=430, avg=177.74, stdev=35.99 00:14:27.638 clat percentiles (msec): 00:14:27.638 | 1.00th=[ 86], 5.00th=[ 144], 10.00th=[ 148], 20.00th=[ 155], 00:14:27.638 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 182], 00:14:27.638 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 199], 95.00th=[ 211], 00:14:27.638 | 99.00th=[ 326], 99.50th=[ 372], 99.90th=[ 418], 99.95th=[ 430], 00:14:27.638 | 99.99th=[ 430] 00:14:27.638 bw ( KiB/s): min=71680, max=108544, per=6.60%, avg=92705.60, stdev=11150.76, samples=20 00:14:27.638 iops : min= 280, max= 424, avg=362.10, stdev=43.57, samples=20 00:14:27.638 lat (msec) : 50=0.08%, 100=1.17%, 250=95.39%, 500=3.36% 00:14:27.638 cpu : usr=0.77%, sys=0.98%, ctx=4451, majf=0, minf=1 00:14:27.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:14:27.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:27.638 issued rwts: total=0,3685,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.638 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.638 job7: (groupid=0, jobs=1): err= 0: pid=79262: Sun Dec 8 05:13:16 2024 00:14:27.638 write: IOPS=363, BW=90.8MiB/s (95.2MB/s)(931MiB/10258msec); 0 zone resets 00:14:27.638 slat (usec): min=18, max=34827, avg=2681.10, stdev=4784.90 00:14:27.638 clat (msec): min=18, max=435, avg=173.47, stdev=39.64 00:14:27.638 lat (msec): min=18, max=435, avg=176.15, stdev=39.76 00:14:27.638 clat percentiles (msec): 00:14:27.638 | 1.00th=[ 78], 5.00th=[ 142], 10.00th=[ 148], 20.00th=[ 153], 00:14:27.638 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 171], 00:14:27.638 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 201], 95.00th=[ 243], 00:14:27.638 | 99.00th=[ 330], 99.50th=[ 376], 99.90th=[ 426], 99.95th=[ 435], 00:14:27.638 | 99.99th=[ 435] 00:14:27.638 bw ( KiB/s): min=69632, max=110592, per=6.67%, avg=93725.30, stdev=12509.02, samples=20 00:14:27.638 iops : min= 272, max= 432, avg=366.10, stdev=48.84, samples=20 00:14:27.638 lat (msec) : 20=0.16%, 50=0.48%, 100=0.64%, 250=93.96%, 500=4.75% 00:14:27.638 cpu : usr=0.67%, sys=1.06%, ctx=3516, majf=0, minf=1 00:14:27.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:14:27.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:27.638 issued rwts: total=0,3725,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.638 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.638 job8: (groupid=0, jobs=1): err= 0: pid=79263: Sun Dec 8 05:13:16 2024 00:14:27.638 write: IOPS=361, BW=90.4MiB/s (94.8MB/s)(926MiB/10238msec); 0 zone resets 00:14:27.638 slat (usec): min=15, max=35954, avg=2697.64, stdev=4819.57 00:14:27.638 clat (msec): min=20, max=429, avg=174.14, stdev=38.82 00:14:27.638 lat (msec): min=20, max=429, avg=176.84, stdev=38.92 00:14:27.638 clat percentiles (msec): 00:14:27.638 | 1.00th=[ 92], 5.00th=[ 142], 10.00th=[ 148], 20.00th=[ 153], 00:14:27.638 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 174], 00:14:27.638 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 201], 95.00th=[ 243], 00:14:27.638 | 99.00th=[ 326], 99.50th=[ 372], 99.90th=[ 418], 99.95th=[ 430], 00:14:27.638 | 99.99th=[ 430] 00:14:27.638 bw ( KiB/s): min=69632, max=110080, per=6.63%, avg=93184.00, stdev=12467.46, samples=20 00:14:27.638 iops : min= 272, max= 430, avg=364.00, stdev=48.70, samples=20 00:14:27.638 lat (msec) : 50=0.51%, 100=0.54%, 250=94.14%, 500=4.81% 00:14:27.638 cpu : usr=0.59%, sys=1.04%, ctx=4512, majf=0, minf=1 00:14:27.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:14:27.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:27.639 issued rwts: total=0,3703,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.639 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.639 job9: (groupid=0, jobs=1): err= 0: pid=79264: Sun Dec 8 05:13:16 2024 00:14:27.639 write: IOPS=689, BW=172MiB/s (181MB/s)(1739MiB/10086msec); 0 zone resets 00:14:27.639 slat (usec): min=14, max=14734, avg=1432.67, stdev=2477.70 00:14:27.639 clat (msec): min=11, max=224, avg=91.35, stdev=15.16 00:14:27.639 lat (msec): min=11, max=224, avg=92.78, stdev=15.17 00:14:27.639 clat percentiles (msec): 00:14:27.639 | 1.00th=[ 77], 5.00th=[ 82], 10.00th=[ 83], 20.00th=[ 86], 00:14:27.639 | 30.00th=[ 87], 40.00th=[ 88], 50.00th=[ 89], 60.00th=[ 90], 00:14:27.639 | 70.00th=[ 91], 80.00th=[ 94], 90.00th=[ 105], 95.00th=[ 109], 00:14:27.639 | 99.00th=[ 169], 99.50th=[ 199], 99.90th=[ 222], 99.95th=[ 224], 00:14:27.639 | 99.99th=[ 226] 00:14:27.639 bw ( KiB/s): min=110592, max=190976, per=12.56%, avg=176417.40, stdev=18279.62, samples=20 00:14:27.639 iops : min= 432, max= 746, avg=689.10, stdev=71.40, samples=20 00:14:27.639 lat (msec) : 20=0.17%, 50=0.40%, 100=86.63%, 250=12.80% 00:14:27.639 cpu : usr=1.29%, sys=1.76%, ctx=6049, majf=0, minf=1 00:14:27.639 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:14:27.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:27.639 issued rwts: total=0,6955,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.639 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.639 job10: (groupid=0, jobs=1): err= 0: pid=79265: Sun Dec 8 05:13:16 2024 00:14:27.639 write: IOPS=362, BW=90.7MiB/s (95.1MB/s)(930MiB/10255msec); 0 zone resets 00:14:27.639 slat (usec): min=17, max=33854, avg=2684.42, stdev=4793.50 00:14:27.639 clat 
(msec): min=10, max=433, avg=173.60, stdev=39.80 00:14:27.639 lat (msec): min=10, max=433, avg=176.28, stdev=39.93 00:14:27.639 clat percentiles (msec): 00:14:27.639 | 1.00th=[ 73], 5.00th=[ 142], 10.00th=[ 148], 20.00th=[ 153], 00:14:27.639 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 174], 00:14:27.639 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 201], 95.00th=[ 245], 00:14:27.639 | 99.00th=[ 330], 99.50th=[ 376], 99.90th=[ 422], 99.95th=[ 435], 00:14:27.639 | 99.99th=[ 435] 00:14:27.639 bw ( KiB/s): min=69632, max=112640, per=6.67%, avg=93634.55, stdev=12348.63, samples=20 00:14:27.639 iops : min= 272, max= 440, avg=365.75, stdev=48.23, samples=20 00:14:27.639 lat (msec) : 20=0.21%, 50=0.43%, 100=0.75%, 250=93.87%, 500=4.73% 00:14:27.639 cpu : usr=0.68%, sys=1.00%, ctx=3480, majf=0, minf=1 00:14:27.639 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:14:27.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:27.639 issued rwts: total=0,3721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.639 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.639 00:14:27.639 Run status group 0 (all jobs): 00:14:27.639 WRITE: bw=1372MiB/s (1438MB/s), 89.9MiB/s-275MiB/s (94.3MB/s-289MB/s), io=13.7GiB (14.8GB), run=10053-10258msec 00:14:27.639 00:14:27.639 Disk stats (read/write): 00:14:27.639 nvme0n1: ios=50/13687, merge=0/0, ticks=48/1213478, in_queue=1213526, util=97.73% 00:14:27.639 nvme10n1: ios=49/21948, merge=0/0, ticks=46/1213946, in_queue=1213992, util=97.90% 00:14:27.639 nvme1n1: ios=41/8371, merge=0/0, ticks=47/1211274, in_queue=1211321, util=97.90% 00:14:27.639 nvme2n1: ios=27/8365, merge=0/0, ticks=28/1212789, in_queue=1212817, util=98.05% 00:14:27.639 nvme3n1: ios=23/7347, merge=0/0, ticks=34/1232617, in_queue=1232651, util=97.81% 00:14:27.639 nvme4n1: ios=0/8393, merge=0/0, ticks=0/1211722, in_queue=1211722, util=98.37% 00:14:27.639 nvme5n1: ios=0/7338, merge=0/0, ticks=0/1234226, in_queue=1234226, util=98.34% 00:14:27.639 nvme6n1: ios=0/7426, merge=0/0, ticks=0/1236984, in_queue=1236984, util=98.68% 00:14:27.639 nvme7n1: ios=0/7373, merge=0/0, ticks=0/1233501, in_queue=1233501, util=98.61% 00:14:27.639 nvme8n1: ios=0/13745, merge=0/0, ticks=0/1213787, in_queue=1213787, util=98.88% 00:14:27.639 nvme9n1: ios=0/7414, merge=0/0, ticks=0/1236617, in_queue=1236617, util=98.98% 00:14:27.639 05:13:16 -- target/multiconnection.sh@36 -- # sync 00:14:27.639 05:13:16 -- target/multiconnection.sh@37 -- # seq 1 11 00:14:27.639 05:13:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:27.639 05:13:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:27.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.639 05:13:16 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:14:27.639 05:13:16 -- common/autotest_common.sh@1208 -- # local i=0 00:14:27.639 05:13:16 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:27.639 05:13:16 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:14:27.639 05:13:16 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:27.639 05:13:16 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:14:27.639 05:13:16 -- common/autotest_common.sh@1220 -- # return 0 00:14:27.639 05:13:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
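The teardown traced here repeats once per subsystem: disconnect the initiator-side controller, wait until its SPDK$i serial is gone from lsblk, then delete the subsystem over RPC. A condensed sketch of that loop, assuming the stock scripts/rpc.py helper in place of the harness's rpc_cmd and waitforserial_disconnect wrappers:

    #!/usr/bin/env bash
    # Sketch of the per-subsystem teardown shown in the trace (assumed paths;
    # the real script drives this through rpc_cmd/waitforserial_disconnect).
    NVMF_SUBSYS=${NVMF_SUBSYS:-11}
    rpc_py=${rpc_py:-/home/vagrant/spdk_repo/spdk/scripts/rpc.py}

    for i in $(seq 1 "$NVMF_SUBSYS"); do
        # Detach the initiator-side controller for this subsystem.
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"

        # Wait until no block device with serial SPDK$i is still visible.
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
            sleep 1
        done

        # Remove the subsystem on the target side over the RPC socket.
        "$rpc_py" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done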
00:14:27.639 05:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.639 05:13:16 -- common/autotest_common.sh@10 -- # set +x 00:14:27.639 05:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.639 05:13:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:27.639 05:13:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:14:27.639 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:14:27.639 05:13:16 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:14:27.639 05:13:16 -- common/autotest_common.sh@1208 -- # local i=0 00:14:27.639 05:13:16 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:27.639 05:13:16 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:14:27.639 05:13:16 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:14:27.639 05:13:16 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:27.639 05:13:16 -- common/autotest_common.sh@1220 -- # return 0 00:14:27.639 05:13:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:27.639 05:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.639 05:13:16 -- common/autotest_common.sh@10 -- # set +x 00:14:27.639 05:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.639 05:13:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:27.639 05:13:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:14:27.639 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:14:27.639 05:13:16 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:14:27.639 05:13:16 -- common/autotest_common.sh@1208 -- # local i=0 00:14:27.639 05:13:16 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:27.639 05:13:16 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:14:27.639 05:13:16 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:27.639 05:13:16 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:14:27.639 05:13:16 -- common/autotest_common.sh@1220 -- # return 0 00:14:27.639 05:13:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:27.639 05:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.639 05:13:16 -- common/autotest_common.sh@10 -- # set +x 00:14:27.639 05:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.639 05:13:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:27.639 05:13:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:14:27.639 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:14:27.639 05:13:16 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:14:27.639 05:13:16 -- common/autotest_common.sh@1208 -- # local i=0 00:14:27.639 05:13:16 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:27.639 05:13:16 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:14:27.639 05:13:16 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:27.639 05:13:16 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:14:27.639 05:13:16 -- common/autotest_common.sh@1220 -- # return 0 00:14:27.639 05:13:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:27.639 05:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.639 05:13:16 
-- common/autotest_common.sh@10 -- # set +x 00:14:27.639 05:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.639 05:13:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:27.639 05:13:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:14:27.639 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:14:27.639 05:13:16 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:14:27.639 05:13:16 -- common/autotest_common.sh@1208 -- # local i=0 00:14:27.639 05:13:16 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:27.639 05:13:16 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:14:27.639 05:13:16 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:27.639 05:13:16 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:14:27.639 05:13:16 -- common/autotest_common.sh@1220 -- # return 0 00:14:27.639 05:13:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:14:27.639 05:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.639 05:13:16 -- common/autotest_common.sh@10 -- # set +x 00:14:27.639 05:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.639 05:13:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:27.639 05:13:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:14:27.639 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:14:27.639 05:13:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:14:27.639 05:13:17 -- common/autotest_common.sh@1208 -- # local i=0 00:14:27.639 05:13:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:27.639 05:13:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:14:27.639 05:13:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:14:27.639 05:13:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:27.639 05:13:17 -- common/autotest_common.sh@1220 -- # return 0 00:14:27.639 05:13:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:14:27.639 05:13:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.639 05:13:17 -- common/autotest_common.sh@10 -- # set +x 00:14:27.639 05:13:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.639 05:13:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:27.639 05:13:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:14:27.639 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:14:27.639 05:13:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:14:27.640 05:13:17 -- common/autotest_common.sh@1208 -- # local i=0 00:14:27.640 05:13:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:27.640 05:13:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:14:27.640 05:13:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:27.640 05:13:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:14:27.640 05:13:17 -- common/autotest_common.sh@1220 -- # return 0 00:14:27.640 05:13:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:14:27.640 05:13:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.640 05:13:17 -- common/autotest_common.sh@10 -- # set +x 00:14:27.640 05:13:17 -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:27.640 05:13:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:27.640 05:13:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:14:27.640 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:14:27.640 05:13:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:14:27.640 05:13:17 -- common/autotest_common.sh@1208 -- # local i=0 00:14:27.640 05:13:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:27.640 05:13:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:14:27.640 05:13:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:27.640 05:13:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:14:27.640 05:13:17 -- common/autotest_common.sh@1220 -- # return 0 00:14:27.640 05:13:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:14:27.640 05:13:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.640 05:13:17 -- common/autotest_common.sh@10 -- # set +x 00:14:27.640 05:13:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.640 05:13:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:27.640 05:13:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:14:27.640 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:14:27.640 05:13:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:14:27.640 05:13:17 -- common/autotest_common.sh@1208 -- # local i=0 00:14:27.640 05:13:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:27.640 05:13:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:14:27.640 05:13:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:27.640 05:13:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:14:27.640 05:13:17 -- common/autotest_common.sh@1220 -- # return 0 00:14:27.640 05:13:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:14:27.640 05:13:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.640 05:13:17 -- common/autotest_common.sh@10 -- # set +x 00:14:27.640 05:13:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.640 05:13:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:27.640 05:13:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:14:27.640 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:14:27.640 05:13:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:14:27.640 05:13:17 -- common/autotest_common.sh@1208 -- # local i=0 00:14:27.640 05:13:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:27.640 05:13:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:14:27.640 05:13:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:27.640 05:13:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:14:27.640 05:13:17 -- common/autotest_common.sh@1220 -- # return 0 00:14:27.640 05:13:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:14:27.640 05:13:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.640 05:13:17 -- common/autotest_common.sh@10 -- # set +x 00:14:27.640 05:13:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.640 05:13:17 -- target/multiconnection.sh@37 -- # for i in $(seq 
1 $NVMF_SUBSYS) 00:14:27.640 05:13:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:14:27.897 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:14:27.897 05:13:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:14:27.897 05:13:17 -- common/autotest_common.sh@1208 -- # local i=0 00:14:27.897 05:13:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:14:27.897 05:13:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:27.897 05:13:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:27.897 05:13:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:14:27.897 05:13:17 -- common/autotest_common.sh@1220 -- # return 0 00:14:27.897 05:13:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:14:27.897 05:13:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.897 05:13:17 -- common/autotest_common.sh@10 -- # set +x 00:14:27.897 05:13:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.897 05:13:17 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:14:27.897 05:13:17 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:14:27.897 05:13:17 -- target/multiconnection.sh@47 -- # nvmftestfini 00:14:27.897 05:13:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:27.897 05:13:17 -- nvmf/common.sh@116 -- # sync 00:14:27.897 05:13:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:27.897 05:13:17 -- nvmf/common.sh@119 -- # set +e 00:14:27.897 05:13:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:27.897 05:13:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:27.897 rmmod nvme_tcp 00:14:27.897 rmmod nvme_fabrics 00:14:27.897 rmmod nvme_keyring 00:14:27.897 05:13:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:27.897 05:13:17 -- nvmf/common.sh@123 -- # set -e 00:14:27.897 05:13:17 -- nvmf/common.sh@124 -- # return 0 00:14:27.897 05:13:17 -- nvmf/common.sh@477 -- # '[' -n 78583 ']' 00:14:27.897 05:13:17 -- nvmf/common.sh@478 -- # killprocess 78583 00:14:27.897 05:13:17 -- common/autotest_common.sh@936 -- # '[' -z 78583 ']' 00:14:27.897 05:13:17 -- common/autotest_common.sh@940 -- # kill -0 78583 00:14:27.897 05:13:17 -- common/autotest_common.sh@941 -- # uname 00:14:27.897 05:13:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:27.897 05:13:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78583 00:14:27.897 killing process with pid 78583 00:14:27.897 05:13:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:27.897 05:13:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:27.897 05:13:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78583' 00:14:27.897 05:13:17 -- common/autotest_common.sh@955 -- # kill 78583 00:14:27.897 05:13:17 -- common/autotest_common.sh@960 -- # wait 78583 00:14:28.155 05:13:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:28.155 05:13:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:28.155 05:13:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:28.155 05:13:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:28.155 05:13:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:28.155 05:13:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.155 05:13:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.155 05:13:17 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.155 05:13:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:28.155 00:14:28.155 real 0m48.280s 00:14:28.155 user 2m37.295s 00:14:28.155 sys 0m34.218s 00:14:28.155 05:13:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:28.155 ************************************ 00:14:28.155 END TEST nvmf_multiconnection 00:14:28.155 ************************************ 00:14:28.155 05:13:17 -- common/autotest_common.sh@10 -- # set +x 00:14:28.466 05:13:17 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:14:28.466 05:13:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:28.466 05:13:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:28.466 05:13:17 -- common/autotest_common.sh@10 -- # set +x 00:14:28.466 ************************************ 00:14:28.466 START TEST nvmf_initiator_timeout 00:14:28.466 ************************************ 00:14:28.466 05:13:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:14:28.466 * Looking for test storage... 00:14:28.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:28.466 05:13:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:28.466 05:13:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:28.466 05:13:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:28.466 05:13:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:28.466 05:13:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:28.466 05:13:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:28.466 05:13:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:28.466 05:13:18 -- scripts/common.sh@335 -- # IFS=.-: 00:14:28.466 05:13:18 -- scripts/common.sh@335 -- # read -ra ver1 00:14:28.466 05:13:18 -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.466 05:13:18 -- scripts/common.sh@336 -- # read -ra ver2 00:14:28.466 05:13:18 -- scripts/common.sh@337 -- # local 'op=<' 00:14:28.466 05:13:18 -- scripts/common.sh@339 -- # ver1_l=2 00:14:28.467 05:13:18 -- scripts/common.sh@340 -- # ver2_l=1 00:14:28.467 05:13:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:28.467 05:13:18 -- scripts/common.sh@343 -- # case "$op" in 00:14:28.467 05:13:18 -- scripts/common.sh@344 -- # : 1 00:14:28.467 05:13:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:28.467 05:13:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:28.467 05:13:18 -- scripts/common.sh@364 -- # decimal 1 00:14:28.467 05:13:18 -- scripts/common.sh@352 -- # local d=1 00:14:28.467 05:13:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.467 05:13:18 -- scripts/common.sh@354 -- # echo 1 00:14:28.467 05:13:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:28.467 05:13:18 -- scripts/common.sh@365 -- # decimal 2 00:14:28.467 05:13:18 -- scripts/common.sh@352 -- # local d=2 00:14:28.467 05:13:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.467 05:13:18 -- scripts/common.sh@354 -- # echo 2 00:14:28.467 05:13:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:28.467 05:13:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:28.467 05:13:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:28.467 05:13:18 -- scripts/common.sh@367 -- # return 0 00:14:28.467 05:13:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.467 05:13:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:28.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.467 --rc genhtml_branch_coverage=1 00:14:28.467 --rc genhtml_function_coverage=1 00:14:28.467 --rc genhtml_legend=1 00:14:28.467 --rc geninfo_all_blocks=1 00:14:28.467 --rc geninfo_unexecuted_blocks=1 00:14:28.467 00:14:28.467 ' 00:14:28.467 05:13:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:28.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.467 --rc genhtml_branch_coverage=1 00:14:28.467 --rc genhtml_function_coverage=1 00:14:28.467 --rc genhtml_legend=1 00:14:28.467 --rc geninfo_all_blocks=1 00:14:28.467 --rc geninfo_unexecuted_blocks=1 00:14:28.467 00:14:28.467 ' 00:14:28.467 05:13:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:28.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.467 --rc genhtml_branch_coverage=1 00:14:28.467 --rc genhtml_function_coverage=1 00:14:28.467 --rc genhtml_legend=1 00:14:28.467 --rc geninfo_all_blocks=1 00:14:28.467 --rc geninfo_unexecuted_blocks=1 00:14:28.467 00:14:28.467 ' 00:14:28.467 05:13:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:28.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.467 --rc genhtml_branch_coverage=1 00:14:28.467 --rc genhtml_function_coverage=1 00:14:28.467 --rc genhtml_legend=1 00:14:28.467 --rc geninfo_all_blocks=1 00:14:28.467 --rc geninfo_unexecuted_blocks=1 00:14:28.467 00:14:28.467 ' 00:14:28.467 05:13:18 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:28.467 05:13:18 -- nvmf/common.sh@7 -- # uname -s 00:14:28.467 05:13:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.467 05:13:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.467 05:13:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.467 05:13:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.467 05:13:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.467 05:13:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.467 05:13:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.467 05:13:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.467 05:13:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.467 05:13:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.467 05:13:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 
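The lt/cmp_versions walk traced here gates the lcov options on its version: both version strings are split on '.', '-' and ':' and compared component by component. A simplified sketch covering only the less-than case (the real helper also handles >, <= and >= and validates each component with its decimal check):

    # Simplified version comparison in the spirit of scripts/common.sh lt().
    cmp_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            ((a < b)) && return 0   # first differing component decides
            ((a > b)) && return 1
        done
        return 1                     # equal versions are not "less than"
    }

    # Usage mirroring the traced call: is lcov 1.15 older than 2?
    cmp_lt 1.15 2 && echo "lcov < 2"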
00:14:28.467 05:13:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:14:28.467 05:13:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.467 05:13:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.467 05:13:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:28.467 05:13:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:28.467 05:13:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.467 05:13:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.467 05:13:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.467 05:13:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.467 05:13:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.467 05:13:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.467 05:13:18 -- paths/export.sh@5 -- # export PATH 00:14:28.467 05:13:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.467 05:13:18 -- nvmf/common.sh@46 -- # : 0 00:14:28.467 05:13:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:28.467 05:13:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:28.467 05:13:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:28.467 05:13:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.467 05:13:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.467 05:13:18 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:28.467 05:13:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:28.467 05:13:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:28.467 05:13:18 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:28.467 05:13:18 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:28.467 05:13:18 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:14:28.467 05:13:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:28.467 05:13:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.467 05:13:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:28.467 05:13:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:28.467 05:13:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:28.467 05:13:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.467 05:13:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.467 05:13:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.467 05:13:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:28.467 05:13:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:28.467 05:13:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:28.467 05:13:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:28.467 05:13:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:28.467 05:13:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:28.467 05:13:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:28.467 05:13:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:28.467 05:13:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:28.467 05:13:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:28.467 05:13:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:28.467 05:13:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:28.467 05:13:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:28.467 05:13:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:28.467 05:13:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:28.467 05:13:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:28.467 05:13:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:28.467 05:13:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:28.467 05:13:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:28.467 05:13:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:28.467 Cannot find device "nvmf_tgt_br" 00:14:28.467 05:13:18 -- nvmf/common.sh@154 -- # true 00:14:28.467 05:13:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:28.467 Cannot find device "nvmf_tgt_br2" 00:14:28.467 05:13:18 -- nvmf/common.sh@155 -- # true 00:14:28.467 05:13:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:28.467 05:13:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:28.467 Cannot find device "nvmf_tgt_br" 00:14:28.467 05:13:18 -- nvmf/common.sh@157 -- # true 00:14:28.467 05:13:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:28.725 Cannot find device "nvmf_tgt_br2" 00:14:28.725 05:13:18 -- nvmf/common.sh@158 -- # true 00:14:28.725 05:13:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:28.725 05:13:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:28.725 05:13:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
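nvmf_veth_init, traced from here, builds the test topology: a network namespace for the target, veth pairs bridged back to the initiator, 10.0.0.x addressing, and an iptables rule for port 4420. A condensed, order-preserving sketch of those commands (iproute2, iptables and root privileges assumed; the real helper's cleanup and error handling are omitted):

    # Condensed sketch of the veth/namespace setup performed below.
    NS=nvmf_tgt_ns_spdk

    ip netns add "$NS"

    # veth pairs: initiator end stays in the root namespace, target ends
    # are moved into the namespace.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"

    # Addressing: 10.0.0.1 for the initiator, 10.0.0.2/.3 for the target.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the root-namespace ends together.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Let NVMe/TCP traffic to port 4420 reach the initiator interface.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT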
00:14:28.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:28.725 05:13:18 -- nvmf/common.sh@161 -- # true 00:14:28.725 05:13:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:28.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:28.725 05:13:18 -- nvmf/common.sh@162 -- # true 00:14:28.725 05:13:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:28.725 05:13:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:28.725 05:13:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:28.725 05:13:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:28.725 05:13:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:28.725 05:13:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:28.725 05:13:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:28.725 05:13:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:28.725 05:13:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:28.725 05:13:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:28.725 05:13:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:28.725 05:13:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:28.725 05:13:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:28.725 05:13:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:28.725 05:13:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:28.725 05:13:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:28.726 05:13:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:28.726 05:13:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:28.726 05:13:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:28.726 05:13:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:28.726 05:13:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:28.726 05:13:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:28.726 05:13:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:28.726 05:13:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:28.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:28.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:14:28.726 00:14:28.726 --- 10.0.0.2 ping statistics --- 00:14:28.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.726 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:14:28.726 05:13:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:28.726 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:28.726 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:14:28.726 00:14:28.726 --- 10.0.0.3 ping statistics --- 00:14:28.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.726 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:28.726 05:13:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:28.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:28.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:28.726 00:14:28.726 --- 10.0.0.1 ping statistics --- 00:14:28.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.726 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:28.726 05:13:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:28.726 05:13:18 -- nvmf/common.sh@421 -- # return 0 00:14:28.726 05:13:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:28.726 05:13:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:28.726 05:13:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:28.726 05:13:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:28.726 05:13:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:28.726 05:13:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:28.726 05:13:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:28.984 05:13:18 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:14:28.984 05:13:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:28.984 05:13:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:28.984 05:13:18 -- common/autotest_common.sh@10 -- # set +x 00:14:28.984 05:13:18 -- nvmf/common.sh@469 -- # nvmfpid=79644 00:14:28.984 05:13:18 -- nvmf/common.sh@470 -- # waitforlisten 79644 00:14:28.984 05:13:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:28.984 05:13:18 -- common/autotest_common.sh@829 -- # '[' -z 79644 ']' 00:14:28.984 05:13:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.984 05:13:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:28.984 05:13:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.984 05:13:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.984 05:13:18 -- common/autotest_common.sh@10 -- # set +x 00:14:28.984 [2024-12-08 05:13:18.589069] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:28.984 [2024-12-08 05:13:18.589170] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.985 [2024-12-08 05:13:18.727007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:28.985 [2024-12-08 05:13:18.764486] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:28.985 [2024-12-08 05:13:18.764667] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.985 [2024-12-08 05:13:18.764705] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.985 [2024-12-08 05:13:18.764719] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
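nvmfappstart, traced just above, launches nvmf_tgt inside the namespace and waits for its RPC socket before the test proceeds. A minimal sketch of that launch-and-wait pattern; probing the socket with scripts/rpc.py rpc_get_methods is an assumption here, as the harness's waitforlisten has its own retry limit and liveness checks:

    # Sketch of launching nvmf_tgt in the test namespace and waiting for RPC.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed probe helper

    ip netns exec nvmf_tgt_ns_spdk "$SPDK_BIN" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Wait for the target to create /var/tmp/spdk.sock and answer RPCs.
    for _ in $(seq 1 100); do
        if "$RPC_PY" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done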
00:14:28.985 [2024-12-08 05:13:18.765182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.985 [2024-12-08 05:13:18.765282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.985 [2024-12-08 05:13:18.766268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:28.985 [2024-12-08 05:13:18.766300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.243 05:13:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.243 05:13:18 -- common/autotest_common.sh@862 -- # return 0 00:14:29.243 05:13:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:29.243 05:13:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:29.243 05:13:18 -- common/autotest_common.sh@10 -- # set +x 00:14:29.243 05:13:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.243 05:13:18 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:29.243 05:13:18 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:29.243 05:13:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.243 05:13:18 -- common/autotest_common.sh@10 -- # set +x 00:14:29.243 Malloc0 00:14:29.243 05:13:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.243 05:13:18 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:14:29.243 05:13:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.243 05:13:18 -- common/autotest_common.sh@10 -- # set +x 00:14:29.243 Delay0 00:14:29.243 05:13:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.243 05:13:18 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:29.243 05:13:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.243 05:13:18 -- common/autotest_common.sh@10 -- # set +x 00:14:29.243 [2024-12-08 05:13:18.942153] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.243 05:13:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.243 05:13:18 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:29.243 05:13:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.243 05:13:18 -- common/autotest_common.sh@10 -- # set +x 00:14:29.243 05:13:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.243 05:13:18 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.243 05:13:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.243 05:13:18 -- common/autotest_common.sh@10 -- # set +x 00:14:29.243 05:13:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.243 05:13:18 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.243 05:13:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.243 05:13:18 -- common/autotest_common.sh@10 -- # set +x 00:14:29.243 [2024-12-08 05:13:18.970848] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.243 05:13:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.243 05:13:18 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 --hostid=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:29.502 05:13:19 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:14:29.502 05:13:19 -- common/autotest_common.sh@1187 -- # local i=0 00:14:29.502 05:13:19 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:29.502 05:13:19 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:29.502 05:13:19 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:31.408 05:13:21 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:31.408 05:13:21 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:31.408 05:13:21 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:31.408 05:13:21 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:31.408 05:13:21 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:31.408 05:13:21 -- common/autotest_common.sh@1197 -- # return 0 00:14:31.408 05:13:21 -- target/initiator_timeout.sh@35 -- # fio_pid=79695 00:14:31.408 05:13:21 -- target/initiator_timeout.sh@37 -- # sleep 3 00:14:31.408 05:13:21 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:14:31.408 [global] 00:14:31.408 thread=1 00:14:31.408 invalidate=1 00:14:31.408 rw=write 00:14:31.408 time_based=1 00:14:31.408 runtime=60 00:14:31.408 ioengine=libaio 00:14:31.408 direct=1 00:14:31.408 bs=4096 00:14:31.408 iodepth=1 00:14:31.408 norandommap=0 00:14:31.408 numjobs=1 00:14:31.408 00:14:31.408 verify_dump=1 00:14:31.408 verify_backlog=512 00:14:31.408 verify_state_save=0 00:14:31.408 do_verify=1 00:14:31.408 verify=crc32c-intel 00:14:31.408 [job0] 00:14:31.408 filename=/dev/nvme0n1 00:14:31.408 Could not set queue depth (nvme0n1) 00:14:31.666 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:31.666 fio-3.35 00:14:31.666 Starting 1 thread 00:14:34.958 05:13:24 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:14:34.958 05:13:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.958 05:13:24 -- common/autotest_common.sh@10 -- # set +x 00:14:34.958 true 00:14:34.958 05:13:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.958 05:13:24 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:14:34.958 05:13:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.958 05:13:24 -- common/autotest_common.sh@10 -- # set +x 00:14:34.958 true 00:14:34.958 05:13:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.958 05:13:24 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:14:34.958 05:13:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.958 05:13:24 -- common/autotest_common.sh@10 -- # set +x 00:14:34.958 true 00:14:34.958 05:13:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.958 05:13:24 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:14:34.958 05:13:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.958 05:13:24 -- common/autotest_common.sh@10 -- # set +x 00:14:34.958 true 00:14:34.958 05:13:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.958 05:13:24 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:14:37.525 05:13:27 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:14:37.525 05:13:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.525 05:13:27 -- common/autotest_common.sh@10 -- # set +x 00:14:37.525 true 00:14:37.525 05:13:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.525 05:13:27 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:14:37.525 05:13:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.525 05:13:27 -- common/autotest_common.sh@10 -- # set +x 00:14:37.525 true 00:14:37.525 05:13:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.525 05:13:27 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:14:37.525 05:13:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.525 05:13:27 -- common/autotest_common.sh@10 -- # set +x 00:14:37.525 true 00:14:37.525 05:13:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.525 05:13:27 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:14:37.525 05:13:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.525 05:13:27 -- common/autotest_common.sh@10 -- # set +x 00:14:37.525 true 00:14:37.525 05:13:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.525 05:13:27 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:14:37.525 05:13:27 -- target/initiator_timeout.sh@54 -- # wait 79695 00:15:33.737 00:15:33.737 job0: (groupid=0, jobs=1): err= 0: pid=79716: Sun Dec 8 05:14:21 2024 00:15:33.737 read: IOPS=673, BW=2692KiB/s (2757kB/s)(158MiB/60000msec) 00:15:33.737 slat (usec): min=11, max=15081, avg=20.53, stdev=84.12 00:15:33.737 clat (usec): min=167, max=40722k, avg=1247.73, stdev=202646.54 00:15:33.737 lat (usec): min=179, max=40722k, avg=1268.26, stdev=202646.53 00:15:33.737 clat percentiles (usec): 00:15:33.737 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 204], 00:15:33.737 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:15:33.737 | 70.00th=[ 237], 80.00th=[ 253], 90.00th=[ 318], 95.00th=[ 383], 00:15:33.737 | 99.00th=[ 469], 99.50th=[ 490], 99.90th=[ 537], 99.95th=[ 652], 00:15:33.737 | 99.99th=[ 1188] 00:15:33.737 write: IOPS=674, BW=2697KiB/s (2761kB/s)(158MiB/60000msec); 0 zone resets 00:15:33.737 slat (usec): min=13, max=675, avg=29.00, stdev= 9.72 00:15:33.737 clat (usec): min=61, max=7947, avg=183.77, stdev=75.77 00:15:33.737 lat (usec): min=143, max=7979, avg=212.76, stdev=78.65 00:15:33.737 clat percentiles (usec): 00:15:33.737 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 153], 00:15:33.737 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 174], 00:15:33.737 | 70.00th=[ 182], 80.00th=[ 194], 90.00th=[ 237], 95.00th=[ 310], 00:15:33.737 | 99.00th=[ 392], 99.50th=[ 416], 99.90th=[ 562], 99.95th=[ 742], 00:15:33.737 | 99.99th=[ 2704] 00:15:33.737 bw ( KiB/s): min= 464, max=11144, per=100.00%, avg=8086.97, stdev=2089.06, samples=39 00:15:33.737 iops : min= 116, max= 2786, avg=2021.74, stdev=522.27, samples=39 00:15:33.737 lat (usec) : 100=0.01%, 250=85.02%, 500=14.75%, 750=0.19%, 1000=0.02% 00:15:33.737 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:15:33.737 cpu : usr=0.64%, sys=2.61%, ctx=80839, majf=0, minf=5 00:15:33.737 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:33.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.737 issued rwts: total=40381,40448,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:33.737 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:33.737 00:15:33.737 Run status group 0 (all jobs): 00:15:33.737 READ: bw=2692KiB/s (2757kB/s), 2692KiB/s-2692KiB/s (2757kB/s-2757kB/s), io=158MiB (165MB), run=60000-60000msec 00:15:33.737 WRITE: bw=2697KiB/s (2761kB/s), 2697KiB/s-2697KiB/s (2761kB/s-2761kB/s), io=158MiB (166MB), run=60000-60000msec 00:15:33.737 00:15:33.737 Disk stats (read/write): 00:15:33.737 nvme0n1: ios=40211/40448, merge=0/0, ticks=9827/7835, in_queue=17662, util=99.76% 00:15:33.737 05:14:21 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:33.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.737 05:14:21 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:33.737 05:14:21 -- common/autotest_common.sh@1208 -- # local i=0 00:15:33.737 05:14:21 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:33.737 05:14:21 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:33.737 05:14:21 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:33.737 05:14:21 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:33.737 05:14:21 -- common/autotest_common.sh@1220 -- # return 0 00:15:33.737 05:14:21 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:15:33.737 nvmf hotplug test: fio successful as expected 00:15:33.737 05:14:21 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:15:33.738 05:14:21 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:33.738 05:14:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.738 05:14:21 -- common/autotest_common.sh@10 -- # set +x 00:15:33.738 05:14:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.738 05:14:21 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:15:33.738 05:14:21 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:15:33.738 05:14:21 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:15:33.738 05:14:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:33.738 05:14:21 -- nvmf/common.sh@116 -- # sync 00:15:33.738 05:14:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:33.738 05:14:21 -- nvmf/common.sh@119 -- # set +e 00:15:33.738 05:14:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:33.738 05:14:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:33.738 rmmod nvme_tcp 00:15:33.738 rmmod nvme_fabrics 00:15:33.738 rmmod nvme_keyring 00:15:33.738 05:14:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:33.738 05:14:21 -- nvmf/common.sh@123 -- # set -e 00:15:33.738 05:14:21 -- nvmf/common.sh@124 -- # return 0 00:15:33.738 05:14:21 -- nvmf/common.sh@477 -- # '[' -n 79644 ']' 00:15:33.738 05:14:21 -- nvmf/common.sh@478 -- # killprocess 79644 00:15:33.738 05:14:21 -- common/autotest_common.sh@936 -- # '[' -z 79644 ']' 00:15:33.738 05:14:21 -- common/autotest_common.sh@940 -- # kill -0 79644 00:15:33.738 05:14:21 -- common/autotest_common.sh@941 -- # uname 00:15:33.738 05:14:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:33.738 05:14:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79644 00:15:33.738 05:14:21 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:33.738 05:14:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:33.738 killing process with pid 79644 00:15:33.738 05:14:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79644' 00:15:33.738 05:14:21 -- common/autotest_common.sh@955 -- # kill 79644 00:15:33.738 05:14:21 -- common/autotest_common.sh@960 -- # wait 79644 00:15:33.738 05:14:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:33.738 05:14:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:33.738 05:14:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:33.738 05:14:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:33.738 05:14:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:33.738 05:14:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.738 05:14:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:33.738 05:14:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.738 05:14:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:33.738 00:15:33.738 real 1m3.780s 00:15:33.738 user 3m49.421s 00:15:33.738 sys 0m22.099s 00:15:33.738 05:14:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:33.738 05:14:21 -- common/autotest_common.sh@10 -- # set +x 00:15:33.738 ************************************ 00:15:33.738 END TEST nvmf_initiator_timeout 00:15:33.738 ************************************ 00:15:33.738 05:14:21 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:15:33.738 05:14:21 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:15:33.738 05:14:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:33.738 05:14:21 -- common/autotest_common.sh@10 -- # set +x 00:15:33.738 05:14:21 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:15:33.738 05:14:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:33.738 05:14:21 -- common/autotest_common.sh@10 -- # set +x 00:15:33.738 05:14:21 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:15:33.738 05:14:21 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:33.738 05:14:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:33.738 05:14:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:33.738 05:14:21 -- common/autotest_common.sh@10 -- # set +x 00:15:33.738 ************************************ 00:15:33.738 START TEST nvmf_identify 00:15:33.738 ************************************ 00:15:33.738 05:14:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:33.738 * Looking for test storage... 
00:15:33.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:33.738 05:14:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:33.738 05:14:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:33.738 05:14:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:33.738 05:14:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:33.738 05:14:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:33.738 05:14:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:33.738 05:14:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:33.738 05:14:21 -- scripts/common.sh@335 -- # IFS=.-: 00:15:33.738 05:14:21 -- scripts/common.sh@335 -- # read -ra ver1 00:15:33.738 05:14:21 -- scripts/common.sh@336 -- # IFS=.-: 00:15:33.738 05:14:21 -- scripts/common.sh@336 -- # read -ra ver2 00:15:33.738 05:14:21 -- scripts/common.sh@337 -- # local 'op=<' 00:15:33.738 05:14:21 -- scripts/common.sh@339 -- # ver1_l=2 00:15:33.738 05:14:21 -- scripts/common.sh@340 -- # ver2_l=1 00:15:33.738 05:14:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:33.738 05:14:21 -- scripts/common.sh@343 -- # case "$op" in 00:15:33.738 05:14:21 -- scripts/common.sh@344 -- # : 1 00:15:33.738 05:14:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:33.738 05:14:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:33.738 05:14:21 -- scripts/common.sh@364 -- # decimal 1 00:15:33.738 05:14:21 -- scripts/common.sh@352 -- # local d=1 00:15:33.738 05:14:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:33.738 05:14:21 -- scripts/common.sh@354 -- # echo 1 00:15:33.738 05:14:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:33.738 05:14:21 -- scripts/common.sh@365 -- # decimal 2 00:15:33.738 05:14:21 -- scripts/common.sh@352 -- # local d=2 00:15:33.738 05:14:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:33.738 05:14:21 -- scripts/common.sh@354 -- # echo 2 00:15:33.738 05:14:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:33.738 05:14:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:33.738 05:14:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:33.738 05:14:21 -- scripts/common.sh@367 -- # return 0 00:15:33.738 05:14:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:33.738 05:14:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:33.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.738 --rc genhtml_branch_coverage=1 00:15:33.738 --rc genhtml_function_coverage=1 00:15:33.738 --rc genhtml_legend=1 00:15:33.738 --rc geninfo_all_blocks=1 00:15:33.738 --rc geninfo_unexecuted_blocks=1 00:15:33.738 00:15:33.738 ' 00:15:33.738 05:14:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:33.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.738 --rc genhtml_branch_coverage=1 00:15:33.738 --rc genhtml_function_coverage=1 00:15:33.738 --rc genhtml_legend=1 00:15:33.738 --rc geninfo_all_blocks=1 00:15:33.738 --rc geninfo_unexecuted_blocks=1 00:15:33.738 00:15:33.738 ' 00:15:33.738 05:14:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:33.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.738 --rc genhtml_branch_coverage=1 00:15:33.738 --rc genhtml_function_coverage=1 00:15:33.738 --rc genhtml_legend=1 00:15:33.738 --rc geninfo_all_blocks=1 00:15:33.738 --rc geninfo_unexecuted_blocks=1 00:15:33.738 00:15:33.738 ' 00:15:33.738 
05:14:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:33.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.738 --rc genhtml_branch_coverage=1 00:15:33.738 --rc genhtml_function_coverage=1 00:15:33.738 --rc genhtml_legend=1 00:15:33.738 --rc geninfo_all_blocks=1 00:15:33.738 --rc geninfo_unexecuted_blocks=1 00:15:33.738 00:15:33.738 ' 00:15:33.738 05:14:21 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:33.738 05:14:21 -- nvmf/common.sh@7 -- # uname -s 00:15:33.738 05:14:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.738 05:14:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.738 05:14:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.738 05:14:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.738 05:14:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.738 05:14:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.738 05:14:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.738 05:14:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.738 05:14:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.738 05:14:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.738 05:14:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:15:33.738 05:14:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:15:33.738 05:14:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.738 05:14:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.738 05:14:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:33.738 05:14:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:33.738 05:14:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.738 05:14:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.738 05:14:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.738 05:14:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.738 05:14:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.738 05:14:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.738 05:14:21 -- paths/export.sh@5 -- # export PATH 00:15:33.738 05:14:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.738 05:14:21 -- nvmf/common.sh@46 -- # : 0 00:15:33.738 05:14:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:33.738 05:14:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:33.738 05:14:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:33.738 05:14:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.738 05:14:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.738 05:14:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:33.738 05:14:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:33.738 05:14:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:33.738 05:14:22 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:33.738 05:14:22 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:33.738 05:14:22 -- host/identify.sh@14 -- # nvmftestinit 00:15:33.738 05:14:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:33.738 05:14:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.738 05:14:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:33.738 05:14:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:33.738 05:14:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:33.738 05:14:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.738 05:14:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:33.738 05:14:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.738 05:14:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:33.738 05:14:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:33.738 05:14:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:33.738 05:14:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:33.738 05:14:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:33.738 05:14:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:33.738 05:14:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.738 05:14:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:33.738 05:14:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:33.738 05:14:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:33.738 05:14:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:33.738 05:14:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:33.738 05:14:22 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:33.738 05:14:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.738 05:14:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:33.738 05:14:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:33.739 05:14:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:33.739 05:14:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:33.739 05:14:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:33.739 05:14:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:33.739 Cannot find device "nvmf_tgt_br" 00:15:33.739 05:14:22 -- nvmf/common.sh@154 -- # true 00:15:33.739 05:14:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:33.739 Cannot find device "nvmf_tgt_br2" 00:15:33.739 05:14:22 -- nvmf/common.sh@155 -- # true 00:15:33.739 05:14:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:33.739 05:14:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:33.739 Cannot find device "nvmf_tgt_br" 00:15:33.739 05:14:22 -- nvmf/common.sh@157 -- # true 00:15:33.739 05:14:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:33.739 Cannot find device "nvmf_tgt_br2" 00:15:33.739 05:14:22 -- nvmf/common.sh@158 -- # true 00:15:33.739 05:14:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:33.739 05:14:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:33.739 05:14:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:33.739 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:33.739 05:14:22 -- nvmf/common.sh@161 -- # true 00:15:33.739 05:14:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:33.739 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:33.739 05:14:22 -- nvmf/common.sh@162 -- # true 00:15:33.739 05:14:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:33.739 05:14:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:33.739 05:14:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:33.739 05:14:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:33.739 05:14:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:33.739 05:14:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:33.739 05:14:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:33.739 05:14:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:33.739 05:14:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:33.739 05:14:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:33.739 05:14:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:33.739 05:14:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:33.739 05:14:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:33.739 05:14:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:33.739 05:14:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:33.739 05:14:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:15:33.739 05:14:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:33.739 05:14:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:33.739 05:14:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:33.739 05:14:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:33.739 05:14:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:33.739 05:14:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:33.739 05:14:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:33.739 05:14:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:33.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:33.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:15:33.739 00:15:33.739 --- 10.0.0.2 ping statistics --- 00:15:33.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.739 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:33.739 05:14:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:33.739 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:33.739 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:15:33.739 00:15:33.739 --- 10.0.0.3 ping statistics --- 00:15:33.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.739 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:33.739 05:14:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:33.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:33.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:33.739 00:15:33.739 --- 10.0.0.1 ping statistics --- 00:15:33.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.739 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:33.739 05:14:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.739 05:14:22 -- nvmf/common.sh@421 -- # return 0 00:15:33.739 05:14:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:33.739 05:14:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.739 05:14:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:33.739 05:14:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:33.739 05:14:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.739 05:14:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:33.739 05:14:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:33.739 05:14:22 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:33.739 05:14:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:33.739 05:14:22 -- common/autotest_common.sh@10 -- # set +x 00:15:33.739 05:14:22 -- host/identify.sh@19 -- # nvmfpid=80557 00:15:33.739 05:14:22 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:33.739 05:14:22 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:33.739 05:14:22 -- host/identify.sh@23 -- # waitforlisten 80557 00:15:33.739 05:14:22 -- common/autotest_common.sh@829 -- # '[' -z 80557 ']' 00:15:33.739 05:14:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.739 05:14:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:33.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:33.739 05:14:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.739 05:14:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:33.739 05:14:22 -- common/autotest_common.sh@10 -- # set +x 00:15:33.739 [2024-12-08 05:14:22.422467] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:33.739 [2024-12-08 05:14:22.422562] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.739 [2024-12-08 05:14:22.572341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:33.739 [2024-12-08 05:14:22.608243] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:33.739 [2024-12-08 05:14:22.608394] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.739 [2024-12-08 05:14:22.608407] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.739 [2024-12-08 05:14:22.608416] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.739 [2024-12-08 05:14:22.608513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.739 [2024-12-08 05:14:22.608651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.739 [2024-12-08 05:14:22.608851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:33.739 [2024-12-08 05:14:22.608871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.739 05:14:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:33.739 05:14:23 -- common/autotest_common.sh@862 -- # return 0 00:15:33.739 05:14:23 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:33.739 05:14:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.739 05:14:23 -- common/autotest_common.sh@10 -- # set +x 00:15:33.739 [2024-12-08 05:14:23.469719] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.739 05:14:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.739 05:14:23 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:33.739 05:14:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:33.739 05:14:23 -- common/autotest_common.sh@10 -- # set +x 00:15:33.739 05:14:23 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:33.739 05:14:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.739 05:14:23 -- common/autotest_common.sh@10 -- # set +x 00:15:33.996 Malloc0 00:15:33.996 05:14:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.996 05:14:23 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:33.996 05:14:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.996 05:14:23 -- common/autotest_common.sh@10 -- # set +x 00:15:33.996 05:14:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.996 05:14:23 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:33.996 05:14:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.996 05:14:23 -- common/autotest_common.sh@10 -- # set +x 00:15:33.996 
05:14:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.996 05:14:23 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.996 05:14:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.996 05:14:23 -- common/autotest_common.sh@10 -- # set +x 00:15:33.996 [2024-12-08 05:14:23.553725] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.996 05:14:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.996 05:14:23 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:33.996 05:14:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.996 05:14:23 -- common/autotest_common.sh@10 -- # set +x 00:15:33.996 05:14:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.996 05:14:23 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:33.996 05:14:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.996 05:14:23 -- common/autotest_common.sh@10 -- # set +x 00:15:33.996 [2024-12-08 05:14:23.569462] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:15:33.996 [ 00:15:33.996 { 00:15:33.996 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:33.996 "subtype": "Discovery", 00:15:33.996 "listen_addresses": [ 00:15:33.996 { 00:15:33.996 "transport": "TCP", 00:15:33.997 "trtype": "TCP", 00:15:33.997 "adrfam": "IPv4", 00:15:33.997 "traddr": "10.0.0.2", 00:15:33.997 "trsvcid": "4420" 00:15:33.997 } 00:15:33.997 ], 00:15:33.997 "allow_any_host": true, 00:15:33.997 "hosts": [] 00:15:33.997 }, 00:15:33.997 { 00:15:33.997 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.997 "subtype": "NVMe", 00:15:33.997 "listen_addresses": [ 00:15:33.997 { 00:15:33.997 "transport": "TCP", 00:15:33.997 "trtype": "TCP", 00:15:33.997 "adrfam": "IPv4", 00:15:33.997 "traddr": "10.0.0.2", 00:15:33.997 "trsvcid": "4420" 00:15:33.997 } 00:15:33.997 ], 00:15:33.997 "allow_any_host": true, 00:15:33.997 "hosts": [], 00:15:33.997 "serial_number": "SPDK00000000000001", 00:15:33.997 "model_number": "SPDK bdev Controller", 00:15:33.997 "max_namespaces": 32, 00:15:33.997 "min_cntlid": 1, 00:15:33.997 "max_cntlid": 65519, 00:15:33.997 "namespaces": [ 00:15:33.997 { 00:15:33.997 "nsid": 1, 00:15:33.997 "bdev_name": "Malloc0", 00:15:33.997 "name": "Malloc0", 00:15:33.997 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:33.997 "eui64": "ABCDEF0123456789", 00:15:33.997 "uuid": "31dae611-b83f-4040-94c6-ca1cb0b9d4b1" 00:15:33.997 } 00:15:33.997 ] 00:15:33.997 } 00:15:33.997 ] 00:15:33.997 05:14:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.997 05:14:23 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:33.997 [2024-12-08 05:14:23.617966] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:33.997 [2024-12-08 05:14:23.618065] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80602 ] 00:15:33.997 [2024-12-08 05:14:23.771169] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:15:33.997 [2024-12-08 05:14:23.771255] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:33.997 [2024-12-08 05:14:23.771263] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:33.997 [2024-12-08 05:14:23.771276] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:33.997 [2024-12-08 05:14:23.771291] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:15:33.997 [2024-12-08 05:14:23.771450] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:15:33.997 [2024-12-08 05:14:23.771513] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1838510 0 00:15:33.997 [2024-12-08 05:14:23.778697] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:33.997 [2024-12-08 05:14:23.778727] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:33.997 [2024-12-08 05:14:23.778733] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:33.997 [2024-12-08 05:14:23.778738] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:33.997 [2024-12-08 05:14:23.778785] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:33.997 [2024-12-08 05:14:23.778792] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:33.997 [2024-12-08 05:14:23.778797] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1838510) 00:15:33.997 [2024-12-08 05:14:23.778813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:33.997 [2024-12-08 05:14:23.778850] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18848a0, cid 0, qid 0 00:15:34.262 [2024-12-08 05:14:23.786753] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.262 [2024-12-08 05:14:23.786820] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.262 [2024-12-08 05:14:23.786831] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.262 [2024-12-08 05:14:23.786841] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18848a0) on tqpair=0x1838510 00:15:34.262 [2024-12-08 05:14:23.786870] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:34.262 [2024-12-08 05:14:23.786886] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:15:34.262 [2024-12-08 05:14:23.786897] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:15:34.262 [2024-12-08 05:14:23.786935] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.262 [2024-12-08 05:14:23.786945] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.262 [2024-12-08 
05:14:23.786953] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1838510) 00:15:34.262 [2024-12-08 05:14:23.786976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.262 [2024-12-08 05:14:23.787045] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18848a0, cid 0, qid 0 00:15:34.262 [2024-12-08 05:14:23.787147] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.262 [2024-12-08 05:14:23.787161] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.262 [2024-12-08 05:14:23.787167] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.262 [2024-12-08 05:14:23.787174] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18848a0) on tqpair=0x1838510 00:15:34.263 [2024-12-08 05:14:23.787185] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:15:34.263 [2024-12-08 05:14:23.787197] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:15:34.263 [2024-12-08 05:14:23.787209] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.263 [2024-12-08 05:14:23.787217] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.263 [2024-12-08 05:14:23.787223] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1838510) 00:15:34.263 [2024-12-08 05:14:23.787236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.263 [2024-12-08 05:14:23.787269] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18848a0, cid 0, qid 0 00:15:34.263 [2024-12-08 05:14:23.787326] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.263 [2024-12-08 05:14:23.787338] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.263 [2024-12-08 05:14:23.787344] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.263 [2024-12-08 05:14:23.787351] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18848a0) on tqpair=0x1838510 00:15:34.263 [2024-12-08 05:14:23.787362] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:15:34.263 [2024-12-08 05:14:23.787376] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:15:34.263 [2024-12-08 05:14:23.787387] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.263 [2024-12-08 05:14:23.787395] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.263 [2024-12-08 05:14:23.787401] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1838510) 00:15:34.263 [2024-12-08 05:14:23.787413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.263 [2024-12-08 05:14:23.787461] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18848a0, cid 0, qid 0 00:15:34.263 [2024-12-08 05:14:23.787518] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.263 [2024-12-08 05:14:23.787531] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.263 [2024-12-08 05:14:23.787538] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.263 [2024-12-08 05:14:23.787543] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18848a0) on tqpair=0x1838510 00:15:34.263 [2024-12-08 05:14:23.787552] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:34.263 [2024-12-08 05:14:23.787565] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.263 [2024-12-08 05:14:23.787570] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.263 [2024-12-08 05:14:23.787574] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1838510) 00:15:34.263 [2024-12-08 05:14:23.787582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.263 [2024-12-08 05:14:23.787604] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18848a0, cid 0, qid 0 00:15:34.263 [2024-12-08 05:14:23.787653] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.263 [2024-12-08 05:14:23.787660] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.263 [2024-12-08 05:14:23.787664] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.263 [2024-12-08 05:14:23.787669] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18848a0) on tqpair=0x1838510 00:15:34.263 [2024-12-08 05:14:23.787688] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:15:34.263 [2024-12-08 05:14:23.787696] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:15:34.263 [2024-12-08 05:14:23.787705] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:34.263 [2024-12-08 05:14:23.787811] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:15:34.263 [2024-12-08 05:14:23.787816] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:34.263 [2024-12-08 05:14:23.787827] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.263 [2024-12-08 05:14:23.787832] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.263 [2024-12-08 05:14:23.787836] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1838510) 00:15:34.263 [2024-12-08 05:14:23.787844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.263 [2024-12-08 05:14:23.787875] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18848a0, cid 0, qid 0 00:15:34.263 [2024-12-08 05:14:23.787947] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.263 [2024-12-08 05:14:23.787955] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.263 [2024-12-08 05:14:23.787959] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:15:34.263 [2024-12-08 05:14:23.787963] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18848a0) on tqpair=0x1838510 00:15:34.263 [2024-12-08 05:14:23.787970] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:34.263 [2024-12-08 05:14:23.787981] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.263 [2024-12-08 05:14:23.787986] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.263 [2024-12-08 05:14:23.787990] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1838510) 00:15:34.263 [2024-12-08 05:14:23.787998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.263 [2024-12-08 05:14:23.788018] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18848a0, cid 0, qid 0 00:15:34.263 [2024-12-08 05:14:23.788070] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.263 [2024-12-08 05:14:23.788077] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.263 [2024-12-08 05:14:23.788081] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.263 [2024-12-08 05:14:23.788085] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18848a0) on tqpair=0x1838510 00:15:34.263 [2024-12-08 05:14:23.788091] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:34.263 [2024-12-08 05:14:23.788097] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:15:34.263 [2024-12-08 05:14:23.788106] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:15:34.263 [2024-12-08 05:14:23.788126] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:15:34.263 [2024-12-08 05:14:23.788140] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.263 [2024-12-08 05:14:23.788145] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.263 [2024-12-08 05:14:23.788149] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1838510) 00:15:34.263 [2024-12-08 05:14:23.788157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.263 [2024-12-08 05:14:23.788177] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18848a0, cid 0, qid 0 00:15:34.263 [2024-12-08 05:14:23.788292] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:34.263 [2024-12-08 05:14:23.788300] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:34.263 [2024-12-08 05:14:23.788304] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:34.263 [2024-12-08 05:14:23.788309] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1838510): datao=0, datal=4096, cccid=0 00:15:34.263 [2024-12-08 05:14:23.788314] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18848a0) on tqpair(0x1838510): expected_datao=0, 
payload_size=4096 00:15:34.263 [2024-12-08 05:14:23.788325] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:34.263 [2024-12-08 05:14:23.788330] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:34.263 [2024-12-08 05:14:23.788339] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.263 [2024-12-08 05:14:23.788345] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.263 [2024-12-08 05:14:23.788349] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.263 [2024-12-08 05:14:23.788353] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18848a0) on tqpair=0x1838510 00:15:34.263 [2024-12-08 05:14:23.788365] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:15:34.263 [2024-12-08 05:14:23.788371] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:15:34.263 [2024-12-08 05:14:23.788376] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:15:34.263 [2024-12-08 05:14:23.788382] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:15:34.263 [2024-12-08 05:14:23.788387] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:15:34.263 [2024-12-08 05:14:23.788393] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:15:34.263 [2024-12-08 05:14:23.788407] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:15:34.263 [2024-12-08 05:14:23.788416] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.263 [2024-12-08 05:14:23.788420] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.788425] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1838510) 00:15:34.264 [2024-12-08 05:14:23.788433] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:34.264 [2024-12-08 05:14:23.788453] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18848a0, cid 0, qid 0 00:15:34.264 [2024-12-08 05:14:23.788511] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.264 [2024-12-08 05:14:23.788519] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.264 [2024-12-08 05:14:23.788523] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.788527] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18848a0) on tqpair=0x1838510 00:15:34.264 [2024-12-08 05:14:23.788536] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.788541] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.788545] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1838510) 00:15:34.264 [2024-12-08 05:14:23.788552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.264 [2024-12-08 
05:14:23.788559] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.788563] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.788567] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1838510) 00:15:34.264 [2024-12-08 05:14:23.788574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.264 [2024-12-08 05:14:23.788580] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.788584] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.788588] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1838510) 00:15:34.264 [2024-12-08 05:14:23.788595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.264 [2024-12-08 05:14:23.788601] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.788605] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.788609] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1838510) 00:15:34.264 [2024-12-08 05:14:23.788616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.264 [2024-12-08 05:14:23.788621] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:15:34.264 [2024-12-08 05:14:23.788635] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:34.264 [2024-12-08 05:14:23.788643] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.788647] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.788651] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1838510) 00:15:34.264 [2024-12-08 05:14:23.788658] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.264 [2024-12-08 05:14:23.788694] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18848a0, cid 0, qid 0 00:15:34.264 [2024-12-08 05:14:23.788704] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1884a00, cid 1, qid 0 00:15:34.264 [2024-12-08 05:14:23.788710] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1884b60, cid 2, qid 0 00:15:34.264 [2024-12-08 05:14:23.788715] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1884cc0, cid 3, qid 0 00:15:34.264 [2024-12-08 05:14:23.788720] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1884e20, cid 4, qid 0 00:15:34.264 [2024-12-08 05:14:23.788817] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.264 [2024-12-08 05:14:23.788824] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.264 [2024-12-08 05:14:23.788828] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.788833] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x1884e20) on tqpair=0x1838510 00:15:34.264 [2024-12-08 05:14:23.788840] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:15:34.264 [2024-12-08 05:14:23.788846] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:15:34.264 [2024-12-08 05:14:23.788863] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.788872] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.788879] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1838510) 00:15:34.264 [2024-12-08 05:14:23.788890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.264 [2024-12-08 05:14:23.788917] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1884e20, cid 4, qid 0 00:15:34.264 [2024-12-08 05:14:23.788986] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:34.264 [2024-12-08 05:14:23.788995] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:34.264 [2024-12-08 05:14:23.788999] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.789003] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1838510): datao=0, datal=4096, cccid=4 00:15:34.264 [2024-12-08 05:14:23.789008] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1884e20) on tqpair(0x1838510): expected_datao=0, payload_size=4096 00:15:34.264 [2024-12-08 05:14:23.789017] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.789022] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.789031] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.264 [2024-12-08 05:14:23.789037] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.264 [2024-12-08 05:14:23.789041] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.789046] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1884e20) on tqpair=0x1838510 00:15:34.264 [2024-12-08 05:14:23.789062] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:15:34.264 [2024-12-08 05:14:23.789095] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.789101] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.789105] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1838510) 00:15:34.264 [2024-12-08 05:14:23.789113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.264 [2024-12-08 05:14:23.789121] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.789125] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.789129] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1838510) 00:15:34.264 [2024-12-08 05:14:23.789136] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.264 [2024-12-08 05:14:23.789161] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1884e20, cid 4, qid 0 00:15:34.264 [2024-12-08 05:14:23.789168] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1884f80, cid 5, qid 0 00:15:34.264 [2024-12-08 05:14:23.789295] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:34.264 [2024-12-08 05:14:23.789313] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:34.264 [2024-12-08 05:14:23.789318] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.789322] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1838510): datao=0, datal=1024, cccid=4 00:15:34.264 [2024-12-08 05:14:23.789327] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1884e20) on tqpair(0x1838510): expected_datao=0, payload_size=1024 00:15:34.264 [2024-12-08 05:14:23.789336] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.789341] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.789347] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.264 [2024-12-08 05:14:23.789353] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.264 [2024-12-08 05:14:23.789357] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.789362] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1884f80) on tqpair=0x1838510 00:15:34.264 [2024-12-08 05:14:23.789383] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.264 [2024-12-08 05:14:23.789391] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.264 [2024-12-08 05:14:23.789395] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.789399] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1884e20) on tqpair=0x1838510 00:15:34.264 [2024-12-08 05:14:23.789412] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.789417] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.789421] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1838510) 00:15:34.264 [2024-12-08 05:14:23.789429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.264 [2024-12-08 05:14:23.789454] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1884e20, cid 4, qid 0 00:15:34.264 [2024-12-08 05:14:23.789547] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:34.264 [2024-12-08 05:14:23.789559] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:34.264 [2024-12-08 05:14:23.789564] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.789568] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1838510): datao=0, datal=3072, cccid=4 00:15:34.264 [2024-12-08 05:14:23.789573] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1884e20) on tqpair(0x1838510): expected_datao=0, payload_size=3072 00:15:34.264 [2024-12-08 
05:14:23.789581] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.789586] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.789595] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.264 [2024-12-08 05:14:23.789601] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.264 [2024-12-08 05:14:23.789605] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.789609] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1884e20) on tqpair=0x1838510 00:15:34.264 [2024-12-08 05:14:23.789620] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.789625] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.264 [2024-12-08 05:14:23.789629] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1838510) 00:15:34.264 [2024-12-08 05:14:23.789636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.265 [2024-12-08 05:14:23.789660] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1884e20, cid 4, qid 0 00:15:34.265 [2024-12-08 05:14:23.789748] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:34.265 [2024-12-08 05:14:23.789758] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:34.265 [2024-12-08 05:14:23.789762] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:34.265 [2024-12-08 05:14:23.789766] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1838510): datao=0, datal=8, cccid=4 00:15:34.265 [2024-12-08 05:14:23.789771] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1884e20) on tqpair(0x1838510): expected_datao=0, payload_size=8 00:15:34.265 [2024-12-08 05:14:23.789779] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:34.265 [2024-12-08 05:14:23.789783] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:34.265 [2024-12-08 05:14:23.789800] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.265 [2024-12-08 05:14:23.789808] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.265 [2024-12-08 05:14:23.789812] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.265 [2024-12-08 05:14:23.789816] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1884e20) on tqpair=0x1838510 00:15:34.265 ===================================================== 00:15:34.265 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:34.265 ===================================================== 00:15:34.265 Controller Capabilities/Features 00:15:34.265 ================================ 00:15:34.265 Vendor ID: 0000 00:15:34.265 Subsystem Vendor ID: 0000 00:15:34.265 Serial Number: .................... 00:15:34.265 Model Number: ........................................ 
00:15:34.265 Firmware Version: 24.01.1 00:15:34.265 Recommended Arb Burst: 0 00:15:34.265 IEEE OUI Identifier: 00 00 00 00:15:34.265 Multi-path I/O 00:15:34.265 May have multiple subsystem ports: No 00:15:34.265 May have multiple controllers: No 00:15:34.265 Associated with SR-IOV VF: No 00:15:34.265 Max Data Transfer Size: 131072 00:15:34.265 Max Number of Namespaces: 0 00:15:34.265 Max Number of I/O Queues: 1024 00:15:34.265 NVMe Specification Version (VS): 1.3 00:15:34.265 NVMe Specification Version (Identify): 1.3 00:15:34.265 Maximum Queue Entries: 128 00:15:34.265 Contiguous Queues Required: Yes 00:15:34.265 Arbitration Mechanisms Supported 00:15:34.265 Weighted Round Robin: Not Supported 00:15:34.265 Vendor Specific: Not Supported 00:15:34.265 Reset Timeout: 15000 ms 00:15:34.265 Doorbell Stride: 4 bytes 00:15:34.265 NVM Subsystem Reset: Not Supported 00:15:34.265 Command Sets Supported 00:15:34.265 NVM Command Set: Supported 00:15:34.265 Boot Partition: Not Supported 00:15:34.265 Memory Page Size Minimum: 4096 bytes 00:15:34.265 Memory Page Size Maximum: 4096 bytes 00:15:34.265 Persistent Memory Region: Not Supported 00:15:34.265 Optional Asynchronous Events Supported 00:15:34.265 Namespace Attribute Notices: Not Supported 00:15:34.265 Firmware Activation Notices: Not Supported 00:15:34.265 ANA Change Notices: Not Supported 00:15:34.265 PLE Aggregate Log Change Notices: Not Supported 00:15:34.265 LBA Status Info Alert Notices: Not Supported 00:15:34.265 EGE Aggregate Log Change Notices: Not Supported 00:15:34.265 Normal NVM Subsystem Shutdown event: Not Supported 00:15:34.265 Zone Descriptor Change Notices: Not Supported 00:15:34.265 Discovery Log Change Notices: Supported 00:15:34.265 Controller Attributes 00:15:34.265 128-bit Host Identifier: Not Supported 00:15:34.265 Non-Operational Permissive Mode: Not Supported 00:15:34.265 NVM Sets: Not Supported 00:15:34.265 Read Recovery Levels: Not Supported 00:15:34.265 Endurance Groups: Not Supported 00:15:34.265 Predictable Latency Mode: Not Supported 00:15:34.265 Traffic Based Keep ALive: Not Supported 00:15:34.265 Namespace Granularity: Not Supported 00:15:34.265 SQ Associations: Not Supported 00:15:34.265 UUID List: Not Supported 00:15:34.265 Multi-Domain Subsystem: Not Supported 00:15:34.265 Fixed Capacity Management: Not Supported 00:15:34.265 Variable Capacity Management: Not Supported 00:15:34.265 Delete Endurance Group: Not Supported 00:15:34.265 Delete NVM Set: Not Supported 00:15:34.265 Extended LBA Formats Supported: Not Supported 00:15:34.265 Flexible Data Placement Supported: Not Supported 00:15:34.265 00:15:34.265 Controller Memory Buffer Support 00:15:34.265 ================================ 00:15:34.265 Supported: No 00:15:34.265 00:15:34.265 Persistent Memory Region Support 00:15:34.265 ================================ 00:15:34.265 Supported: No 00:15:34.265 00:15:34.265 Admin Command Set Attributes 00:15:34.265 ============================ 00:15:34.265 Security Send/Receive: Not Supported 00:15:34.265 Format NVM: Not Supported 00:15:34.265 Firmware Activate/Download: Not Supported 00:15:34.265 Namespace Management: Not Supported 00:15:34.265 Device Self-Test: Not Supported 00:15:34.265 Directives: Not Supported 00:15:34.265 NVMe-MI: Not Supported 00:15:34.265 Virtualization Management: Not Supported 00:15:34.265 Doorbell Buffer Config: Not Supported 00:15:34.265 Get LBA Status Capability: Not Supported 00:15:34.265 Command & Feature Lockdown Capability: Not Supported 00:15:34.265 Abort Command Limit: 1 00:15:34.265 
Async Event Request Limit: 4 00:15:34.265 Number of Firmware Slots: N/A 00:15:34.265 Firmware Slot 1 Read-Only: N/A 00:15:34.265 Firmware Activation Without Reset: N/A 00:15:34.265 Multiple Update Detection Support: N/A 00:15:34.265 Firmware Update Granularity: No Information Provided 00:15:34.265 Per-Namespace SMART Log: No 00:15:34.265 Asymmetric Namespace Access Log Page: Not Supported 00:15:34.265 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:34.265 Command Effects Log Page: Not Supported 00:15:34.265 Get Log Page Extended Data: Supported 00:15:34.265 Telemetry Log Pages: Not Supported 00:15:34.265 Persistent Event Log Pages: Not Supported 00:15:34.265 Supported Log Pages Log Page: May Support 00:15:34.265 Commands Supported & Effects Log Page: Not Supported 00:15:34.265 Feature Identifiers & Effects Log Page:May Support 00:15:34.265 NVMe-MI Commands & Effects Log Page: May Support 00:15:34.265 Data Area 4 for Telemetry Log: Not Supported 00:15:34.265 Error Log Page Entries Supported: 128 00:15:34.265 Keep Alive: Not Supported 00:15:34.265 00:15:34.265 NVM Command Set Attributes 00:15:34.265 ========================== 00:15:34.265 Submission Queue Entry Size 00:15:34.265 Max: 1 00:15:34.265 Min: 1 00:15:34.265 Completion Queue Entry Size 00:15:34.265 Max: 1 00:15:34.265 Min: 1 00:15:34.265 Number of Namespaces: 0 00:15:34.265 Compare Command: Not Supported 00:15:34.265 Write Uncorrectable Command: Not Supported 00:15:34.265 Dataset Management Command: Not Supported 00:15:34.265 Write Zeroes Command: Not Supported 00:15:34.265 Set Features Save Field: Not Supported 00:15:34.265 Reservations: Not Supported 00:15:34.265 Timestamp: Not Supported 00:15:34.265 Copy: Not Supported 00:15:34.265 Volatile Write Cache: Not Present 00:15:34.265 Atomic Write Unit (Normal): 1 00:15:34.265 Atomic Write Unit (PFail): 1 00:15:34.265 Atomic Compare & Write Unit: 1 00:15:34.265 Fused Compare & Write: Supported 00:15:34.265 Scatter-Gather List 00:15:34.265 SGL Command Set: Supported 00:15:34.265 SGL Keyed: Supported 00:15:34.265 SGL Bit Bucket Descriptor: Not Supported 00:15:34.265 SGL Metadata Pointer: Not Supported 00:15:34.265 Oversized SGL: Not Supported 00:15:34.265 SGL Metadata Address: Not Supported 00:15:34.265 SGL Offset: Supported 00:15:34.265 Transport SGL Data Block: Not Supported 00:15:34.265 Replay Protected Memory Block: Not Supported 00:15:34.265 00:15:34.265 Firmware Slot Information 00:15:34.265 ========================= 00:15:34.265 Active slot: 0 00:15:34.265 00:15:34.265 00:15:34.265 Error Log 00:15:34.265 ========= 00:15:34.265 00:15:34.265 Active Namespaces 00:15:34.265 ================= 00:15:34.265 Discovery Log Page 00:15:34.265 ================== 00:15:34.265 Generation Counter: 2 00:15:34.265 Number of Records: 2 00:15:34.265 Record Format: 0 00:15:34.265 00:15:34.265 Discovery Log Entry 0 00:15:34.265 ---------------------- 00:15:34.265 Transport Type: 3 (TCP) 00:15:34.265 Address Family: 1 (IPv4) 00:15:34.265 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:34.265 Entry Flags: 00:15:34.265 Duplicate Returned Information: 1 00:15:34.265 Explicit Persistent Connection Support for Discovery: 1 00:15:34.265 Transport Requirements: 00:15:34.265 Secure Channel: Not Required 00:15:34.265 Port ID: 0 (0x0000) 00:15:34.265 Controller ID: 65535 (0xffff) 00:15:34.265 Admin Max SQ Size: 128 00:15:34.265 Transport Service Identifier: 4420 00:15:34.266 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:34.266 Transport Address: 10.0.0.2 00:15:34.266 
Discovery Log Entry 1 00:15:34.266 ---------------------- 00:15:34.266 Transport Type: 3 (TCP) 00:15:34.266 Address Family: 1 (IPv4) 00:15:34.266 Subsystem Type: 2 (NVM Subsystem) 00:15:34.266 Entry Flags: 00:15:34.266 Duplicate Returned Information: 0 00:15:34.266 Explicit Persistent Connection Support for Discovery: 0 00:15:34.266 Transport Requirements: 00:15:34.266 Secure Channel: Not Required 00:15:34.266 Port ID: 0 (0x0000) 00:15:34.266 Controller ID: 65535 (0xffff) 00:15:34.266 Admin Max SQ Size: 128 00:15:34.266 Transport Service Identifier: 4420 00:15:34.266 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:34.266 Transport Address: 10.0.0.2 [2024-12-08 05:14:23.789934] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:15:34.266 [2024-12-08 05:14:23.789953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.266 [2024-12-08 05:14:23.789961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.266 [2024-12-08 05:14:23.789969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.266 [2024-12-08 05:14:23.789985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.266 [2024-12-08 05:14:23.790003] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.266 [2024-12-08 05:14:23.790012] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.266 [2024-12-08 05:14:23.790018] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1838510) 00:15:34.266 [2024-12-08 05:14:23.790031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.266 [2024-12-08 05:14:23.790070] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1884cc0, cid 3, qid 0 00:15:34.266 [2024-12-08 05:14:23.790124] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.266 [2024-12-08 05:14:23.790138] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.266 [2024-12-08 05:14:23.790143] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.266 [2024-12-08 05:14:23.790148] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1884cc0) on tqpair=0x1838510 00:15:34.266 [2024-12-08 05:14:23.790158] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.266 [2024-12-08 05:14:23.790162] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.266 [2024-12-08 05:14:23.790166] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1838510) 00:15:34.266 [2024-12-08 05:14:23.790175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.266 [2024-12-08 05:14:23.790200] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1884cc0, cid 3, qid 0 00:15:34.266 [2024-12-08 05:14:23.790276] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.266 [2024-12-08 05:14:23.790283] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.266 [2024-12-08 05:14:23.790287] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.266 [2024-12-08 05:14:23.790291] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1884cc0) on tqpair=0x1838510 00:15:34.266 [2024-12-08 05:14:23.790297] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:15:34.266 [2024-12-08 05:14:23.790303] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:15:34.266 [2024-12-08 05:14:23.790313] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.266 [2024-12-08 05:14:23.790318] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.266 [2024-12-08 05:14:23.790322] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1838510) 00:15:34.266 [2024-12-08 05:14:23.790330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.266 [2024-12-08 05:14:23.790348] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1884cc0, cid 3, qid 0 00:15:34.266 [2024-12-08 05:14:23.790401] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.266 [2024-12-08 05:14:23.790412] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.266 [2024-12-08 05:14:23.790419] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.266 [2024-12-08 05:14:23.790425] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1884cc0) on tqpair=0x1838510 00:15:34.266 [2024-12-08 05:14:23.790445] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.266 [2024-12-08 05:14:23.790454] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.266 [2024-12-08 05:14:23.790460] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1838510) 00:15:34.266 [2024-12-08 05:14:23.790471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.266 [2024-12-08 05:14:23.790499] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1884cc0, cid 3, qid 0 00:15:34.266 [2024-12-08 05:14:23.790546] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.266 [2024-12-08 05:14:23.790559] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.266 [2024-12-08 05:14:23.790566] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.266 [2024-12-08 05:14:23.790573] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1884cc0) on tqpair=0x1838510 00:15:34.266 [2024-12-08 05:14:23.790592] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.266 [2024-12-08 05:14:23.790598] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.266 [2024-12-08 05:14:23.790602] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1838510) 00:15:34.266 [2024-12-08 05:14:23.790610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.266 [2024-12-08 05:14:23.790633] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1884cc0, cid 3, qid 0 00:15:34.266 [2024-12-08 05:14:23.794704] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.266 [2024-12-08 
05:14:23.794732] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.266 [2024-12-08 05:14:23.794738] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.266 [2024-12-08 05:14:23.794743] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1884cc0) on tqpair=0x1838510 00:15:34.266 [2024-12-08 05:14:23.794761] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.266 [2024-12-08 05:14:23.794766] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.266 [2024-12-08 05:14:23.794770] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1838510) 00:15:34.266 [2024-12-08 05:14:23.794781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.266 [2024-12-08 05:14:23.794811] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1884cc0, cid 3, qid 0 00:15:34.266 [2024-12-08 05:14:23.794877] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.266 [2024-12-08 05:14:23.794884] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.266 [2024-12-08 05:14:23.794888] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.266 [2024-12-08 05:14:23.794892] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1884cc0) on tqpair=0x1838510 00:15:34.266 [2024-12-08 05:14:23.794902] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:15:34.266 00:15:34.266 05:14:23 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:34.266 [2024-12-08 05:14:23.833843] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
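For reference, the host/identify.sh step above runs spdk_nvme_identify against the TCP target at 10.0.0.2:4420 with subsystem NQN nqn.2016-06.io.spdk:cnode1; the connect and identify debug trace that follows is that tool walking the controller state machine. A minimal sketch of the same connect-and-identify flow through SPDK's public C API (the program name and the single printed field are illustrative, not part of the test) could look like:

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
            struct spdk_env_opts opts;
            struct spdk_nvme_transport_id trid = {};
            struct spdk_nvme_ctrlr *ctrlr;
            const struct spdk_nvme_ctrlr_data *cdata;

            /* Bring up the SPDK environment (hugepages, PCI access, etc.). */
            spdk_env_opts_init(&opts);
            opts.name = "identify_sketch";          /* illustrative app name */
            if (spdk_env_init(&opts) < 0) {
                    return 1;
            }

            /* Same transport ID fields as the -r string on the command line. */
            spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
            trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
            snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
            snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
            snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

            /* Synchronous connect: this drives the FABRIC CONNECT, FABRIC
             * PROPERTY GET/SET and IDENTIFY sequence seen in the trace below. */
            ctrlr = spdk_nvme_connect(&trid, NULL, 0);
            if (ctrlr == NULL) {
                    return 1;
            }

            /* Identify Controller data, e.g. the "Model Number" field printed
             * in the dump below (mn is not NUL-terminated, hence %.40s). */
            cdata = spdk_nvme_ctrlr_get_data(ctrlr);
            printf("Model Number: %.40s\n", cdata->mn);

            spdk_nvme_detach(ctrlr);
            return 0;
    }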
00:15:34.266 [2024-12-08 05:14:23.833916] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80604 ] 00:15:34.266 [2024-12-08 05:14:23.977189] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:15:34.266 [2024-12-08 05:14:23.977272] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:34.266 [2024-12-08 05:14:23.977280] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:34.266 [2024-12-08 05:14:23.977295] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:34.266 [2024-12-08 05:14:23.977310] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:15:34.266 [2024-12-08 05:14:23.977464] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:15:34.266 [2024-12-08 05:14:23.977540] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d8b510 0 00:15:34.266 [2024-12-08 05:14:23.989705] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:34.266 [2024-12-08 05:14:23.989749] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:34.266 [2024-12-08 05:14:23.989756] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:34.266 [2024-12-08 05:14:23.989760] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:34.266 [2024-12-08 05:14:23.989816] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.266 [2024-12-08 05:14:23.989824] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.266 [2024-12-08 05:14:23.989829] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d8b510) 00:15:34.266 [2024-12-08 05:14:23.989848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:34.266 [2024-12-08 05:14:23.989894] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd78a0, cid 0, qid 0 00:15:34.266 [2024-12-08 05:14:23.997691] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.266 [2024-12-08 05:14:23.997716] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.266 [2024-12-08 05:14:23.997722] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.266 [2024-12-08 05:14:23.997728] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd78a0) on tqpair=0x1d8b510 00:15:34.266 [2024-12-08 05:14:23.997741] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:34.266 [2024-12-08 05:14:23.997750] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:15:34.267 [2024-12-08 05:14:23.997758] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:15:34.267 [2024-12-08 05:14:23.997776] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.997781] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.997786] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d8b510) 00:15:34.267 [2024-12-08 05:14:23.997796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.267 [2024-12-08 05:14:23.997827] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd78a0, cid 0, qid 0 00:15:34.267 [2024-12-08 05:14:23.997893] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.267 [2024-12-08 05:14:23.997907] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.267 [2024-12-08 05:14:23.997914] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.997921] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd78a0) on tqpair=0x1d8b510 00:15:34.267 [2024-12-08 05:14:23.997930] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:15:34.267 [2024-12-08 05:14:23.997940] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:15:34.267 [2024-12-08 05:14:23.997949] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.997953] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.997957] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d8b510) 00:15:34.267 [2024-12-08 05:14:23.997967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.267 [2024-12-08 05:14:23.997995] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd78a0, cid 0, qid 0 00:15:34.267 [2024-12-08 05:14:23.998045] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.267 [2024-12-08 05:14:23.998054] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.267 [2024-12-08 05:14:23.998058] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.998063] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd78a0) on tqpair=0x1d8b510 00:15:34.267 [2024-12-08 05:14:23.998070] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:15:34.267 [2024-12-08 05:14:23.998080] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:15:34.267 [2024-12-08 05:14:23.998090] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.998096] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.998103] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d8b510) 00:15:34.267 [2024-12-08 05:14:23.998115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.267 [2024-12-08 05:14:23.998148] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd78a0, cid 0, qid 0 00:15:34.267 [2024-12-08 05:14:23.998195] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.267 [2024-12-08 05:14:23.998208] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.267 [2024-12-08 
05:14:23.998215] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.998223] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd78a0) on tqpair=0x1d8b510 00:15:34.267 [2024-12-08 05:14:23.998235] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:34.267 [2024-12-08 05:14:23.998249] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.998254] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.998258] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d8b510) 00:15:34.267 [2024-12-08 05:14:23.998267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.267 [2024-12-08 05:14:23.998289] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd78a0, cid 0, qid 0 00:15:34.267 [2024-12-08 05:14:23.998337] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.267 [2024-12-08 05:14:23.998351] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.267 [2024-12-08 05:14:23.998356] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.998363] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd78a0) on tqpair=0x1d8b510 00:15:34.267 [2024-12-08 05:14:23.998372] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:15:34.267 [2024-12-08 05:14:23.998381] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:15:34.267 [2024-12-08 05:14:23.998396] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:34.267 [2024-12-08 05:14:23.998504] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:15:34.267 [2024-12-08 05:14:23.998508] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:34.267 [2024-12-08 05:14:23.998532] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.998540] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.998546] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d8b510) 00:15:34.267 [2024-12-08 05:14:23.998558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.267 [2024-12-08 05:14:23.998590] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd78a0, cid 0, qid 0 00:15:34.267 [2024-12-08 05:14:23.998639] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.267 [2024-12-08 05:14:23.998652] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.267 [2024-12-08 05:14:23.998659] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.998664] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd78a0) on tqpair=0x1d8b510 00:15:34.267 
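At this point the trace is the standard controller-enable handshake: the initiator writes CC.EN = 1 with a fabrics Property Set and then polls CSTS over Property Get until RDY = 1, and the 15000 ms state timeouts logged here would correspond to the controller's CAP.TO (advertised above as "Reset Timeout: 15000 ms"). As a hedged illustration only (a hypothetical helper, not part of the test), the same registers can be read through SPDK's public accessors once a controller handle exists:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Hypothetical helper: print the registers behind the CC.EN = 1 /
     * CSTS.RDY = 1 handshake shown in the trace.  CAP.TO is in 500 ms
     * units, so TO = 30 would match the 15000 ms timeouts logged above. */
    static void print_enable_handshake_regs(struct spdk_nvme_ctrlr *ctrlr)
    {
            union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);
            union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

            printf("CSTS.RDY=%u CSTS.CFS=%u CAP.TO=%u (x500 ms)\n",
                   (unsigned)csts.bits.rdy, (unsigned)csts.bits.cfs,
                   (unsigned)cap.bits.to);
    }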
[2024-12-08 05:14:23.998685] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:34.267 [2024-12-08 05:14:23.998704] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.998713] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.998720] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d8b510) 00:15:34.267 [2024-12-08 05:14:23.998732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.267 [2024-12-08 05:14:23.998759] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd78a0, cid 0, qid 0 00:15:34.267 [2024-12-08 05:14:23.998811] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.267 [2024-12-08 05:14:23.998819] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.267 [2024-12-08 05:14:23.998822] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.998827] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd78a0) on tqpair=0x1d8b510 00:15:34.267 [2024-12-08 05:14:23.998833] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:34.267 [2024-12-08 05:14:23.998839] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:15:34.267 [2024-12-08 05:14:23.998849] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:15:34.267 [2024-12-08 05:14:23.998875] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:15:34.267 [2024-12-08 05:14:23.998893] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.998900] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.998905] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d8b510) 00:15:34.267 [2024-12-08 05:14:23.998913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.267 [2024-12-08 05:14:23.998937] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd78a0, cid 0, qid 0 00:15:34.267 [2024-12-08 05:14:23.999030] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:34.267 [2024-12-08 05:14:23.999047] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:34.267 [2024-12-08 05:14:23.999052] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.999056] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d8b510): datao=0, datal=4096, cccid=0 00:15:34.267 [2024-12-08 05:14:23.999069] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd78a0) on tqpair(0x1d8b510): expected_datao=0, payload_size=4096 00:15:34.267 [2024-12-08 05:14:23.999079] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.999084] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:15:34.267 [2024-12-08 05:14:23.999093] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.267 [2024-12-08 05:14:23.999099] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.267 [2024-12-08 05:14:23.999103] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:23.999108] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd78a0) on tqpair=0x1d8b510 00:15:34.268 [2024-12-08 05:14:23.999119] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:15:34.268 [2024-12-08 05:14:23.999125] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:15:34.268 [2024-12-08 05:14:23.999129] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:15:34.268 [2024-12-08 05:14:23.999135] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:15:34.268 [2024-12-08 05:14:23.999140] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:15:34.268 [2024-12-08 05:14:23.999145] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:15:34.268 [2024-12-08 05:14:23.999162] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:15:34.268 [2024-12-08 05:14:23.999175] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:23.999183] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:23.999190] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d8b510) 00:15:34.268 [2024-12-08 05:14:23.999203] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:34.268 [2024-12-08 05:14:23.999234] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd78a0, cid 0, qid 0 00:15:34.268 [2024-12-08 05:14:23.999285] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.268 [2024-12-08 05:14:23.999296] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.268 [2024-12-08 05:14:23.999302] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:23.999310] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd78a0) on tqpair=0x1d8b510 00:15:34.268 [2024-12-08 05:14:23.999324] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:23.999332] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:23.999339] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d8b510) 00:15:34.268 [2024-12-08 05:14:23.999351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.268 [2024-12-08 05:14:23.999359] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:23.999363] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:23.999367] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x1d8b510) 00:15:34.268 [2024-12-08 05:14:23.999373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.268 [2024-12-08 05:14:23.999380] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:23.999384] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:23.999388] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d8b510) 00:15:34.268 [2024-12-08 05:14:23.999394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.268 [2024-12-08 05:14:23.999401] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:23.999405] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:23.999409] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.268 [2024-12-08 05:14:23.999427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.268 [2024-12-08 05:14:23.999437] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:34.268 [2024-12-08 05:14:23.999454] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:34.268 [2024-12-08 05:14:23.999463] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:23.999467] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:23.999471] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d8b510) 00:15:34.268 [2024-12-08 05:14:23.999479] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.268 [2024-12-08 05:14:23.999507] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd78a0, cid 0, qid 0 00:15:34.268 [2024-12-08 05:14:23.999518] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7a00, cid 1, qid 0 00:15:34.268 [2024-12-08 05:14:23.999527] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7b60, cid 2, qid 0 00:15:34.268 [2024-12-08 05:14:23.999535] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.268 [2024-12-08 05:14:23.999544] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7e20, cid 4, qid 0 00:15:34.268 [2024-12-08 05:14:23.999622] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.268 [2024-12-08 05:14:23.999642] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.268 [2024-12-08 05:14:23.999647] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:23.999652] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7e20) on tqpair=0x1d8b510 00:15:34.268 [2024-12-08 05:14:23.999659] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:15:34.268 [2024-12-08 05:14:23.999666] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:34.268 [2024-12-08 05:14:23.999696] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:15:34.268 [2024-12-08 05:14:23.999711] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:34.268 [2024-12-08 05:14:23.999720] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:23.999725] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:23.999729] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d8b510) 00:15:34.268 [2024-12-08 05:14:23.999737] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:34.268 [2024-12-08 05:14:23.999763] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7e20, cid 4, qid 0 00:15:34.268 [2024-12-08 05:14:23.999827] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.268 [2024-12-08 05:14:23.999842] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.268 [2024-12-08 05:14:23.999848] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:23.999856] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7e20) on tqpair=0x1d8b510 00:15:34.268 [2024-12-08 05:14:23.999928] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:15:34.268 [2024-12-08 05:14:23.999942] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:34.268 [2024-12-08 05:14:23.999951] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:23.999955] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:23.999959] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d8b510) 00:15:34.268 [2024-12-08 05:14:23.999968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.268 [2024-12-08 05:14:23.999991] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7e20, cid 4, qid 0 00:15:34.268 [2024-12-08 05:14:24.000052] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:34.268 [2024-12-08 05:14:24.000068] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:34.268 [2024-12-08 05:14:24.000073] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:24.000078] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d8b510): datao=0, datal=4096, cccid=4 00:15:34.268 [2024-12-08 05:14:24.000083] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd7e20) on tqpair(0x1d8b510): expected_datao=0, payload_size=4096 00:15:34.268 [2024-12-08 05:14:24.000092] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:24.000097] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:15:34.268 [2024-12-08 05:14:24.000106] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.268 [2024-12-08 05:14:24.000113] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.268 [2024-12-08 05:14:24.000117] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:24.000121] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7e20) on tqpair=0x1d8b510 00:15:34.268 [2024-12-08 05:14:24.000139] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:15:34.268 [2024-12-08 05:14:24.000151] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:15:34.268 [2024-12-08 05:14:24.000162] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:15:34.268 [2024-12-08 05:14:24.000171] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:24.000176] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:24.000180] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d8b510) 00:15:34.268 [2024-12-08 05:14:24.000188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.268 [2024-12-08 05:14:24.000210] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7e20, cid 4, qid 0 00:15:34.268 [2024-12-08 05:14:24.000284] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:34.268 [2024-12-08 05:14:24.000299] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:34.268 [2024-12-08 05:14:24.000304] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:24.000308] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d8b510): datao=0, datal=4096, cccid=4 00:15:34.268 [2024-12-08 05:14:24.000314] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd7e20) on tqpair(0x1d8b510): expected_datao=0, payload_size=4096 00:15:34.268 [2024-12-08 05:14:24.000322] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:24.000326] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:24.000335] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.268 [2024-12-08 05:14:24.000342] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.268 [2024-12-08 05:14:24.000346] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.268 [2024-12-08 05:14:24.000350] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7e20) on tqpair=0x1d8b510 00:15:34.269 [2024-12-08 05:14:24.000368] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:34.269 [2024-12-08 05:14:24.000380] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:34.269 [2024-12-08 05:14:24.000389] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.000393] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.000397] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d8b510) 00:15:34.269 [2024-12-08 05:14:24.000405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.269 [2024-12-08 05:14:24.000427] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7e20, cid 4, qid 0 00:15:34.269 [2024-12-08 05:14:24.000493] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:34.269 [2024-12-08 05:14:24.000506] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:34.269 [2024-12-08 05:14:24.000513] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.000520] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d8b510): datao=0, datal=4096, cccid=4 00:15:34.269 [2024-12-08 05:14:24.000529] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd7e20) on tqpair(0x1d8b510): expected_datao=0, payload_size=4096 00:15:34.269 [2024-12-08 05:14:24.000542] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.000550] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.000564] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.269 [2024-12-08 05:14:24.000573] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.269 [2024-12-08 05:14:24.000580] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.000588] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7e20) on tqpair=0x1d8b510 00:15:34.269 [2024-12-08 05:14:24.000602] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:34.269 [2024-12-08 05:14:24.000613] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:15:34.269 [2024-12-08 05:14:24.000625] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:15:34.269 [2024-12-08 05:14:24.000633] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:34.269 [2024-12-08 05:14:24.000639] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:15:34.269 [2024-12-08 05:14:24.000645] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:15:34.269 [2024-12-08 05:14:24.000651] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:15:34.269 [2024-12-08 05:14:24.000657] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:15:34.269 [2024-12-08 05:14:24.000693] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.000700] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.000704] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d8b510) 00:15:34.269 [2024-12-08 05:14:24.000713] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.269 [2024-12-08 05:14:24.000721] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.000725] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.000729] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d8b510) 00:15:34.269 [2024-12-08 05:14:24.000736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.269 [2024-12-08 05:14:24.000765] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7e20, cid 4, qid 0 00:15:34.269 [2024-12-08 05:14:24.000773] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7f80, cid 5, qid 0 00:15:34.269 [2024-12-08 05:14:24.000847] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.269 [2024-12-08 05:14:24.000860] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.269 [2024-12-08 05:14:24.000866] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.000874] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7e20) on tqpair=0x1d8b510 00:15:34.269 [2024-12-08 05:14:24.000886] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.269 [2024-12-08 05:14:24.000897] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.269 [2024-12-08 05:14:24.000904] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.000911] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7f80) on tqpair=0x1d8b510 00:15:34.269 [2024-12-08 05:14:24.000928] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.000936] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.000943] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d8b510) 00:15:34.269 [2024-12-08 05:14:24.000951] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.269 [2024-12-08 05:14:24.000975] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7f80, cid 5, qid 0 00:15:34.269 [2024-12-08 05:14:24.001023] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.269 [2024-12-08 05:14:24.001037] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.269 [2024-12-08 05:14:24.001041] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.001046] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7f80) on tqpair=0x1d8b510 00:15:34.269 [2024-12-08 05:14:24.001059] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.001064] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.001068] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d8b510) 00:15:34.269 [2024-12-08 05:14:24.001075] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.269 [2024-12-08 05:14:24.001095] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7f80, cid 5, qid 0 00:15:34.269 [2024-12-08 05:14:24.001146] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.269 [2024-12-08 05:14:24.001160] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.269 [2024-12-08 05:14:24.001167] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.001174] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7f80) on tqpair=0x1d8b510 00:15:34.269 [2024-12-08 05:14:24.001193] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.001201] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.001208] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d8b510) 00:15:34.269 [2024-12-08 05:14:24.001219] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.269 [2024-12-08 05:14:24.001242] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7f80, cid 5, qid 0 00:15:34.269 [2024-12-08 05:14:24.001290] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.269 [2024-12-08 05:14:24.001303] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.269 [2024-12-08 05:14:24.001308] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.001312] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7f80) on tqpair=0x1d8b510 00:15:34.269 [2024-12-08 05:14:24.001328] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.001334] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.001338] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d8b510) 00:15:34.269 [2024-12-08 05:14:24.001346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.269 [2024-12-08 05:14:24.001354] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.001358] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.001362] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d8b510) 00:15:34.269 [2024-12-08 05:14:24.001369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.269 [2024-12-08 05:14:24.001376] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.001381] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.001385] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1d8b510) 00:15:34.269 [2024-12-08 05:14:24.001391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:34.269 [2024-12-08 05:14:24.001401] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.001408] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.001415] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d8b510) 00:15:34.269 [2024-12-08 05:14:24.001426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.269 [2024-12-08 05:14:24.001459] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7f80, cid 5, qid 0 00:15:34.269 [2024-12-08 05:14:24.001473] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7e20, cid 4, qid 0 00:15:34.269 [2024-12-08 05:14:24.001482] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd80e0, cid 6, qid 0 00:15:34.269 [2024-12-08 05:14:24.001490] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd8240, cid 7, qid 0 00:15:34.269 [2024-12-08 05:14:24.001610] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:34.269 [2024-12-08 05:14:24.001625] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:34.269 [2024-12-08 05:14:24.001632] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.001640] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d8b510): datao=0, datal=8192, cccid=5 00:15:34.269 [2024-12-08 05:14:24.001647] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd7f80) on tqpair(0x1d8b510): expected_datao=0, payload_size=8192 00:15:34.269 [2024-12-08 05:14:24.001667] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.005689] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:34.269 [2024-12-08 05:14:24.005705] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:34.269 [2024-12-08 05:14:24.005712] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:34.269 [2024-12-08 05:14:24.005716] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:34.270 [2024-12-08 05:14:24.005720] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d8b510): datao=0, datal=512, cccid=4 00:15:34.270 [2024-12-08 05:14:24.005725] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd7e20) on tqpair(0x1d8b510): expected_datao=0, payload_size=512 00:15:34.270 [2024-12-08 05:14:24.005733] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:34.270 [2024-12-08 05:14:24.005738] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:34.270 [2024-12-08 05:14:24.005744] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:34.270 [2024-12-08 05:14:24.005750] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:34.270 [2024-12-08 05:14:24.005754] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:34.270 [2024-12-08 05:14:24.005758] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d8b510): datao=0, datal=512, cccid=6 00:15:34.270 [2024-12-08 05:14:24.005762] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd80e0) on tqpair(0x1d8b510): expected_datao=0, payload_size=512 00:15:34.270 [2024-12-08 05:14:24.005770] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:34.270 [2024-12-08 05:14:24.005774] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:34.270 [2024-12-08 05:14:24.005780] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:34.270 [2024-12-08 05:14:24.005786] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:34.270 [2024-12-08 05:14:24.005790] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:34.270 [2024-12-08 05:14:24.005794] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d8b510): datao=0, datal=4096, cccid=7 00:15:34.270 [2024-12-08 05:14:24.005798] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd8240) on tqpair(0x1d8b510): expected_datao=0, payload_size=4096 00:15:34.270 [2024-12-08 05:14:24.005810] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:34.270 [2024-12-08 05:14:24.005817] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:34.270 [2024-12-08 05:14:24.005825] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.270 [2024-12-08 05:14:24.005835] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.270 [2024-12-08 05:14:24.005842] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.270 [2024-12-08 05:14:24.005849] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7f80) on tqpair=0x1d8b510 00:15:34.270 [2024-12-08 05:14:24.005874] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.270 [2024-12-08 05:14:24.005882] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.270 [2024-12-08 05:14:24.005886] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.270 [2024-12-08 05:14:24.005890] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7e20) on tqpair=0x1d8b510 00:15:34.270 [2024-12-08 05:14:24.005904] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.270 [2024-12-08 05:14:24.005911] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.270 [2024-12-08 05:14:24.005915] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.270 [2024-12-08 05:14:24.005919] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd80e0) on tqpair=0x1d8b510 00:15:34.270 [2024-12-08 05:14:24.005930] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.270 [2024-12-08 05:14:24.005940] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.270 [2024-12-08 05:14:24.005946] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.270 [2024-12-08 05:14:24.005953] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd8240) on tqpair=0x1d8b510 00:15:34.270 ===================================================== 00:15:34.270 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:34.270 ===================================================== 00:15:34.270 Controller Capabilities/Features 00:15:34.270 ================================ 00:15:34.270 Vendor ID: 8086 00:15:34.270 Subsystem Vendor ID: 8086 00:15:34.270 Serial Number: SPDK00000000000001 00:15:34.270 Model Number: SPDK bdev Controller 00:15:34.270 Firmware Version: 24.01.1 00:15:34.270 Recommended Arb Burst: 6 00:15:34.270 IEEE OUI Identifier: e4 d2 5c 00:15:34.270 Multi-path I/O 00:15:34.270 May have multiple subsystem 
ports: Yes 00:15:34.270 May have multiple controllers: Yes 00:15:34.270 Associated with SR-IOV VF: No 00:15:34.270 Max Data Transfer Size: 131072 00:15:34.270 Max Number of Namespaces: 32 00:15:34.270 Max Number of I/O Queues: 127 00:15:34.270 NVMe Specification Version (VS): 1.3 00:15:34.270 NVMe Specification Version (Identify): 1.3 00:15:34.270 Maximum Queue Entries: 128 00:15:34.270 Contiguous Queues Required: Yes 00:15:34.270 Arbitration Mechanisms Supported 00:15:34.270 Weighted Round Robin: Not Supported 00:15:34.270 Vendor Specific: Not Supported 00:15:34.270 Reset Timeout: 15000 ms 00:15:34.270 Doorbell Stride: 4 bytes 00:15:34.270 NVM Subsystem Reset: Not Supported 00:15:34.270 Command Sets Supported 00:15:34.270 NVM Command Set: Supported 00:15:34.270 Boot Partition: Not Supported 00:15:34.270 Memory Page Size Minimum: 4096 bytes 00:15:34.270 Memory Page Size Maximum: 4096 bytes 00:15:34.270 Persistent Memory Region: Not Supported 00:15:34.270 Optional Asynchronous Events Supported 00:15:34.270 Namespace Attribute Notices: Supported 00:15:34.270 Firmware Activation Notices: Not Supported 00:15:34.270 ANA Change Notices: Not Supported 00:15:34.270 PLE Aggregate Log Change Notices: Not Supported 00:15:34.270 LBA Status Info Alert Notices: Not Supported 00:15:34.270 EGE Aggregate Log Change Notices: Not Supported 00:15:34.270 Normal NVM Subsystem Shutdown event: Not Supported 00:15:34.270 Zone Descriptor Change Notices: Not Supported 00:15:34.270 Discovery Log Change Notices: Not Supported 00:15:34.270 Controller Attributes 00:15:34.270 128-bit Host Identifier: Supported 00:15:34.270 Non-Operational Permissive Mode: Not Supported 00:15:34.270 NVM Sets: Not Supported 00:15:34.270 Read Recovery Levels: Not Supported 00:15:34.270 Endurance Groups: Not Supported 00:15:34.270 Predictable Latency Mode: Not Supported 00:15:34.270 Traffic Based Keep ALive: Not Supported 00:15:34.270 Namespace Granularity: Not Supported 00:15:34.270 SQ Associations: Not Supported 00:15:34.270 UUID List: Not Supported 00:15:34.270 Multi-Domain Subsystem: Not Supported 00:15:34.270 Fixed Capacity Management: Not Supported 00:15:34.270 Variable Capacity Management: Not Supported 00:15:34.270 Delete Endurance Group: Not Supported 00:15:34.270 Delete NVM Set: Not Supported 00:15:34.270 Extended LBA Formats Supported: Not Supported 00:15:34.270 Flexible Data Placement Supported: Not Supported 00:15:34.270 00:15:34.270 Controller Memory Buffer Support 00:15:34.270 ================================ 00:15:34.270 Supported: No 00:15:34.270 00:15:34.270 Persistent Memory Region Support 00:15:34.270 ================================ 00:15:34.270 Supported: No 00:15:34.270 00:15:34.270 Admin Command Set Attributes 00:15:34.270 ============================ 00:15:34.270 Security Send/Receive: Not Supported 00:15:34.270 Format NVM: Not Supported 00:15:34.270 Firmware Activate/Download: Not Supported 00:15:34.270 Namespace Management: Not Supported 00:15:34.270 Device Self-Test: Not Supported 00:15:34.270 Directives: Not Supported 00:15:34.270 NVMe-MI: Not Supported 00:15:34.270 Virtualization Management: Not Supported 00:15:34.270 Doorbell Buffer Config: Not Supported 00:15:34.270 Get LBA Status Capability: Not Supported 00:15:34.270 Command & Feature Lockdown Capability: Not Supported 00:15:34.270 Abort Command Limit: 4 00:15:34.270 Async Event Request Limit: 4 00:15:34.270 Number of Firmware Slots: N/A 00:15:34.270 Firmware Slot 1 Read-Only: N/A 00:15:34.270 Firmware Activation Without Reset: N/A 00:15:34.270 Multiple 
Update Detection Support: N/A 00:15:34.270 Firmware Update Granularity: No Information Provided 00:15:34.270 Per-Namespace SMART Log: No 00:15:34.270 Asymmetric Namespace Access Log Page: Not Supported 00:15:34.270 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:34.270 Command Effects Log Page: Supported 00:15:34.270 Get Log Page Extended Data: Supported 00:15:34.270 Telemetry Log Pages: Not Supported 00:15:34.270 Persistent Event Log Pages: Not Supported 00:15:34.270 Supported Log Pages Log Page: May Support 00:15:34.270 Commands Supported & Effects Log Page: Not Supported 00:15:34.270 Feature Identifiers & Effects Log Page:May Support 00:15:34.270 NVMe-MI Commands & Effects Log Page: May Support 00:15:34.270 Data Area 4 for Telemetry Log: Not Supported 00:15:34.270 Error Log Page Entries Supported: 128 00:15:34.270 Keep Alive: Supported 00:15:34.270 Keep Alive Granularity: 10000 ms 00:15:34.270 00:15:34.270 NVM Command Set Attributes 00:15:34.270 ========================== 00:15:34.270 Submission Queue Entry Size 00:15:34.270 Max: 64 00:15:34.270 Min: 64 00:15:34.270 Completion Queue Entry Size 00:15:34.270 Max: 16 00:15:34.270 Min: 16 00:15:34.270 Number of Namespaces: 32 00:15:34.270 Compare Command: Supported 00:15:34.270 Write Uncorrectable Command: Not Supported 00:15:34.270 Dataset Management Command: Supported 00:15:34.270 Write Zeroes Command: Supported 00:15:34.270 Set Features Save Field: Not Supported 00:15:34.270 Reservations: Supported 00:15:34.270 Timestamp: Not Supported 00:15:34.270 Copy: Supported 00:15:34.270 Volatile Write Cache: Present 00:15:34.270 Atomic Write Unit (Normal): 1 00:15:34.270 Atomic Write Unit (PFail): 1 00:15:34.270 Atomic Compare & Write Unit: 1 00:15:34.270 Fused Compare & Write: Supported 00:15:34.270 Scatter-Gather List 00:15:34.270 SGL Command Set: Supported 00:15:34.271 SGL Keyed: Supported 00:15:34.271 SGL Bit Bucket Descriptor: Not Supported 00:15:34.271 SGL Metadata Pointer: Not Supported 00:15:34.271 Oversized SGL: Not Supported 00:15:34.271 SGL Metadata Address: Not Supported 00:15:34.271 SGL Offset: Supported 00:15:34.271 Transport SGL Data Block: Not Supported 00:15:34.271 Replay Protected Memory Block: Not Supported 00:15:34.271 00:15:34.271 Firmware Slot Information 00:15:34.271 ========================= 00:15:34.271 Active slot: 1 00:15:34.271 Slot 1 Firmware Revision: 24.01.1 00:15:34.271 00:15:34.271 00:15:34.271 Commands Supported and Effects 00:15:34.271 ============================== 00:15:34.271 Admin Commands 00:15:34.271 -------------- 00:15:34.271 Get Log Page (02h): Supported 00:15:34.271 Identify (06h): Supported 00:15:34.271 Abort (08h): Supported 00:15:34.271 Set Features (09h): Supported 00:15:34.271 Get Features (0Ah): Supported 00:15:34.271 Asynchronous Event Request (0Ch): Supported 00:15:34.271 Keep Alive (18h): Supported 00:15:34.271 I/O Commands 00:15:34.271 ------------ 00:15:34.271 Flush (00h): Supported LBA-Change 00:15:34.271 Write (01h): Supported LBA-Change 00:15:34.271 Read (02h): Supported 00:15:34.271 Compare (05h): Supported 00:15:34.271 Write Zeroes (08h): Supported LBA-Change 00:15:34.271 Dataset Management (09h): Supported LBA-Change 00:15:34.271 Copy (19h): Supported LBA-Change 00:15:34.271 Unknown (79h): Supported LBA-Change 00:15:34.271 Unknown (7Ah): Supported 00:15:34.271 00:15:34.271 Error Log 00:15:34.271 ========= 00:15:34.271 00:15:34.271 Arbitration 00:15:34.271 =========== 00:15:34.271 Arbitration Burst: 1 00:15:34.271 00:15:34.271 Power Management 00:15:34.271 ================ 00:15:34.271 
Number of Power States: 1 00:15:34.271 Current Power State: Power State #0 00:15:34.271 Power State #0: 00:15:34.271 Max Power: 0.00 W 00:15:34.271 Non-Operational State: Operational 00:15:34.271 Entry Latency: Not Reported 00:15:34.271 Exit Latency: Not Reported 00:15:34.271 Relative Read Throughput: 0 00:15:34.271 Relative Read Latency: 0 00:15:34.271 Relative Write Throughput: 0 00:15:34.271 Relative Write Latency: 0 00:15:34.271 Idle Power: Not Reported 00:15:34.271 Active Power: Not Reported 00:15:34.271 Non-Operational Permissive Mode: Not Supported 00:15:34.271 00:15:34.271 Health Information 00:15:34.271 ================== 00:15:34.271 Critical Warnings: 00:15:34.271 Available Spare Space: OK 00:15:34.271 Temperature: OK 00:15:34.271 Device Reliability: OK 00:15:34.271 Read Only: No 00:15:34.271 Volatile Memory Backup: OK 00:15:34.271 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:34.271 Temperature Threshold: [2024-12-08 05:14:24.006104] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.271 [2024-12-08 05:14:24.006117] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.271 [2024-12-08 05:14:24.006124] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d8b510) 00:15:34.271 [2024-12-08 05:14:24.006136] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.271 [2024-12-08 05:14:24.006168] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd8240, cid 7, qid 0 00:15:34.271 [2024-12-08 05:14:24.006225] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.271 [2024-12-08 05:14:24.006239] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.271 [2024-12-08 05:14:24.006246] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.271 [2024-12-08 05:14:24.006253] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd8240) on tqpair=0x1d8b510 00:15:34.271 [2024-12-08 05:14:24.006307] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:15:34.271 [2024-12-08 05:14:24.006326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.271 [2024-12-08 05:14:24.006334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.271 [2024-12-08 05:14:24.006340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.271 [2024-12-08 05:14:24.006347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.271 [2024-12-08 05:14:24.006357] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.271 [2024-12-08 05:14:24.006362] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.271 [2024-12-08 05:14:24.006366] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.271 [2024-12-08 05:14:24.006375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.271 [2024-12-08 05:14:24.006402] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 
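(Note on the output above: the Controller Capabilities, Command Set Attributes and Health Information blocks are the identify data that host/identify.sh pulled from nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, while the interleaved *DEBUG* lines are the nvme_tcp transport sending the matching GET FEATURES / GET LOG PAGE admin capsules. The *NOTICE* entries that begin here — "Prepare to destruct SSD", the ABORTED - SQ DELETION completions and the repeated FABRIC PROPERTY GET polls — are the host tearing the controller down and waiting for shutdown to finish. As a rough, hypothetical way to reproduce the same query by hand with the kernel initiator and nvme-cli instead of SPDK's userspace identify tool — assuming the target from this run is still listening and the controller enumerates as /dev/nvme0:)

```bash
# Hypothetical manual equivalent of the identify step, using the kernel
# NVMe/TCP initiator and nvme-cli; not what the harness itself runs.
modprobe nvme-tcp
nvme discover -t tcp -a 10.0.0.2 -s 4420                        # list advertised subsystems
nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme id-ctrl /dev/nvme0 -H                                      # human-readable controller data
nvme id-ns   /dev/nvme0n1 -H                                    # namespace data (NGUID, EUI64, LBA formats)
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
```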
00:15:34.271 [2024-12-08 05:14:24.006449] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.271 [2024-12-08 05:14:24.006462] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.271 [2024-12-08 05:14:24.006468] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.271 [2024-12-08 05:14:24.006472] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.271 [2024-12-08 05:14:24.006483] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.271 [2024-12-08 05:14:24.006490] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.271 [2024-12-08 05:14:24.006496] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.271 [2024-12-08 05:14:24.006508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.271 [2024-12-08 05:14:24.006542] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.271 [2024-12-08 05:14:24.006614] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.271 [2024-12-08 05:14:24.006623] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.271 [2024-12-08 05:14:24.006627] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.271 [2024-12-08 05:14:24.006632] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.271 [2024-12-08 05:14:24.006638] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:15:34.271 [2024-12-08 05:14:24.006644] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:15:34.271 [2024-12-08 05:14:24.006655] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.271 [2024-12-08 05:14:24.006660] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.271 [2024-12-08 05:14:24.006664] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.271 [2024-12-08 05:14:24.006688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.271 [2024-12-08 05:14:24.006714] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.271 [2024-12-08 05:14:24.006769] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.271 [2024-12-08 05:14:24.006776] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.271 [2024-12-08 05:14:24.006783] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.271 [2024-12-08 05:14:24.006789] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.271 [2024-12-08 05:14:24.006809] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.271 [2024-12-08 05:14:24.006818] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.271 [2024-12-08 05:14:24.006825] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.271 [2024-12-08 05:14:24.006837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.271 [2024-12-08 
05:14:24.006869] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.271 [2024-12-08 05:14:24.006923] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.271 [2024-12-08 05:14:24.006931] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.271 [2024-12-08 05:14:24.006935] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.271 [2024-12-08 05:14:24.006940] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.271 [2024-12-08 05:14:24.006953] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.271 [2024-12-08 05:14:24.006958] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.271 [2024-12-08 05:14:24.006962] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.271 [2024-12-08 05:14:24.006970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.271 [2024-12-08 05:14:24.006990] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.271 [2024-12-08 05:14:24.007048] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.271 [2024-12-08 05:14:24.007061] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.271 [2024-12-08 05:14:24.007068] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.271 [2024-12-08 05:14:24.007075] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.271 [2024-12-08 05:14:24.007089] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.271 [2024-12-08 05:14:24.007094] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.271 [2024-12-08 05:14:24.007098] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.271 [2024-12-08 05:14:24.007106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.271 [2024-12-08 05:14:24.007127] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.271 [2024-12-08 05:14:24.007173] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.272 [2024-12-08 05:14:24.007184] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.272 [2024-12-08 05:14:24.007190] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.007198] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.272 [2024-12-08 05:14:24.007216] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.007225] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.007232] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.272 [2024-12-08 05:14:24.007244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.272 [2024-12-08 05:14:24.007268] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.272 [2024-12-08 05:14:24.007319] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:15:34.272 [2024-12-08 05:14:24.007333] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.272 [2024-12-08 05:14:24.007337] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.007342] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.272 [2024-12-08 05:14:24.007355] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.007360] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.007364] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.272 [2024-12-08 05:14:24.007372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.272 [2024-12-08 05:14:24.007393] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.272 [2024-12-08 05:14:24.007454] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.272 [2024-12-08 05:14:24.007478] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.272 [2024-12-08 05:14:24.007483] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.007487] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.272 [2024-12-08 05:14:24.007501] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.007505] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.007509] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.272 [2024-12-08 05:14:24.007518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.272 [2024-12-08 05:14:24.007545] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.272 [2024-12-08 05:14:24.007593] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.272 [2024-12-08 05:14:24.007604] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.272 [2024-12-08 05:14:24.007609] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.007613] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.272 [2024-12-08 05:14:24.007626] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.007633] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.007640] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.272 [2024-12-08 05:14:24.007652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.272 [2024-12-08 05:14:24.007693] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.272 [2024-12-08 05:14:24.007743] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.272 [2024-12-08 05:14:24.007755] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.272 [2024-12-08 05:14:24.007761] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.007768] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.272 [2024-12-08 05:14:24.007787] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.007793] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.007797] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.272 [2024-12-08 05:14:24.007805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.272 [2024-12-08 05:14:24.007827] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.272 [2024-12-08 05:14:24.007880] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.272 [2024-12-08 05:14:24.007890] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.272 [2024-12-08 05:14:24.007897] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.007904] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.272 [2024-12-08 05:14:24.007923] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.007932] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.007938] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.272 [2024-12-08 05:14:24.007950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.272 [2024-12-08 05:14:24.007976] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.272 [2024-12-08 05:14:24.008023] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.272 [2024-12-08 05:14:24.008036] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.272 [2024-12-08 05:14:24.008040] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.008045] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.272 [2024-12-08 05:14:24.008058] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.008063] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.008067] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.272 [2024-12-08 05:14:24.008075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.272 [2024-12-08 05:14:24.008094] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.272 [2024-12-08 05:14:24.008144] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.272 [2024-12-08 05:14:24.008157] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.272 [2024-12-08 05:14:24.008164] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.008170] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on 
tqpair=0x1d8b510 00:15:34.272 [2024-12-08 05:14:24.008183] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.008188] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.008192] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.272 [2024-12-08 05:14:24.008200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.272 [2024-12-08 05:14:24.008221] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.272 [2024-12-08 05:14:24.008273] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.272 [2024-12-08 05:14:24.008286] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.272 [2024-12-08 05:14:24.008293] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.008301] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.272 [2024-12-08 05:14:24.008316] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.008321] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.008325] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.272 [2024-12-08 05:14:24.008333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.272 [2024-12-08 05:14:24.008354] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.272 [2024-12-08 05:14:24.008406] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.272 [2024-12-08 05:14:24.008416] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.272 [2024-12-08 05:14:24.008423] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.008430] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.272 [2024-12-08 05:14:24.008448] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.008458] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.008464] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.272 [2024-12-08 05:14:24.008479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.272 [2024-12-08 05:14:24.008519] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.272 [2024-12-08 05:14:24.008573] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.272 [2024-12-08 05:14:24.008582] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.272 [2024-12-08 05:14:24.008586] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.008591] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.272 [2024-12-08 05:14:24.008605] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.272 [2024-12-08 05:14:24.008610] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.008614] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.273 [2024-12-08 05:14:24.008623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.273 [2024-12-08 05:14:24.008648] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.273 [2024-12-08 05:14:24.008714] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.273 [2024-12-08 05:14:24.008726] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.273 [2024-12-08 05:14:24.008730] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.008735] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.273 [2024-12-08 05:14:24.008748] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.008753] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.008757] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.273 [2024-12-08 05:14:24.008766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.273 [2024-12-08 05:14:24.008789] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.273 [2024-12-08 05:14:24.008845] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.273 [2024-12-08 05:14:24.008858] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.273 [2024-12-08 05:14:24.008866] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.008873] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.273 [2024-12-08 05:14:24.008888] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.008893] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.008897] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.273 [2024-12-08 05:14:24.008905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.273 [2024-12-08 05:14:24.008926] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.273 [2024-12-08 05:14:24.008977] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.273 [2024-12-08 05:14:24.008984] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.273 [2024-12-08 05:14:24.008990] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.008997] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.273 [2024-12-08 05:14:24.009015] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.009023] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.009030] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 
00:15:34.273 [2024-12-08 05:14:24.009042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.273 [2024-12-08 05:14:24.009070] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.273 [2024-12-08 05:14:24.009116] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.273 [2024-12-08 05:14:24.009127] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.273 [2024-12-08 05:14:24.009133] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.009140] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.273 [2024-12-08 05:14:24.009165] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.009174] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.009180] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.273 [2024-12-08 05:14:24.009193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.273 [2024-12-08 05:14:24.009217] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.273 [2024-12-08 05:14:24.009269] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.273 [2024-12-08 05:14:24.009279] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.273 [2024-12-08 05:14:24.009284] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.009288] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.273 [2024-12-08 05:14:24.009301] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.009306] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.009310] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.273 [2024-12-08 05:14:24.009318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.273 [2024-12-08 05:14:24.009345] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.273 [2024-12-08 05:14:24.009390] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.273 [2024-12-08 05:14:24.009400] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.273 [2024-12-08 05:14:24.009404] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.009409] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.273 [2024-12-08 05:14:24.009421] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.009426] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.009430] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.273 [2024-12-08 05:14:24.009438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.273 
[2024-12-08 05:14:24.009459] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.273 [2024-12-08 05:14:24.009509] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.273 [2024-12-08 05:14:24.009521] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.273 [2024-12-08 05:14:24.009529] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.009536] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.273 [2024-12-08 05:14:24.009549] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.009555] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.009561] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.273 [2024-12-08 05:14:24.009573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.273 [2024-12-08 05:14:24.009603] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.273 [2024-12-08 05:14:24.009648] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.273 [2024-12-08 05:14:24.009660] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.273 [2024-12-08 05:14:24.009667] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.013697] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.273 [2024-12-08 05:14:24.013723] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.013729] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.013733] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d8b510) 00:15:34.273 [2024-12-08 05:14:24.013743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.273 [2024-12-08 05:14:24.013772] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7cc0, cid 3, qid 0 00:15:34.273 [2024-12-08 05:14:24.013836] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:34.273 [2024-12-08 05:14:24.013850] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:34.273 [2024-12-08 05:14:24.013858] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:34.273 [2024-12-08 05:14:24.013865] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7cc0) on tqpair=0x1d8b510 00:15:34.273 [2024-12-08 05:14:24.013877] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:15:34.273 0 Kelvin (-273 Celsius) 00:15:34.273 Available Spare: 0% 00:15:34.273 Available Spare Threshold: 0% 00:15:34.273 Life Percentage Used: 0% 00:15:34.273 Data Units Read: 0 00:15:34.273 Data Units Written: 0 00:15:34.273 Host Read Commands: 0 00:15:34.273 Host Write Commands: 0 00:15:34.273 Controller Busy Time: 0 minutes 00:15:34.273 Power Cycles: 0 00:15:34.273 Power On Hours: 0 hours 00:15:34.273 Unsafe Shutdowns: 0 00:15:34.273 Unrecoverable Media Errors: 0 00:15:34.273 Lifetime Error Log Entries: 0 00:15:34.273 Warning Temperature Time: 0 
minutes 00:15:34.273 Critical Temperature Time: 0 minutes 00:15:34.273 00:15:34.273 Number of Queues 00:15:34.273 ================ 00:15:34.273 Number of I/O Submission Queues: 127 00:15:34.273 Number of I/O Completion Queues: 127 00:15:34.273 00:15:34.273 Active Namespaces 00:15:34.273 ================= 00:15:34.273 Namespace ID:1 00:15:34.273 Error Recovery Timeout: Unlimited 00:15:34.273 Command Set Identifier: NVM (00h) 00:15:34.273 Deallocate: Supported 00:15:34.273 Deallocated/Unwritten Error: Not Supported 00:15:34.273 Deallocated Read Value: Unknown 00:15:34.273 Deallocate in Write Zeroes: Not Supported 00:15:34.273 Deallocated Guard Field: 0xFFFF 00:15:34.273 Flush: Supported 00:15:34.273 Reservation: Supported 00:15:34.273 Namespace Sharing Capabilities: Multiple Controllers 00:15:34.273 Size (in LBAs): 131072 (0GiB) 00:15:34.273 Capacity (in LBAs): 131072 (0GiB) 00:15:34.273 Utilization (in LBAs): 131072 (0GiB) 00:15:34.273 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:34.273 EUI64: ABCDEF0123456789 00:15:34.273 UUID: 31dae611-b83f-4040-94c6-ca1cb0b9d4b1 00:15:34.273 Thin Provisioning: Not Supported 00:15:34.273 Per-NS Atomic Units: Yes 00:15:34.273 Atomic Boundary Size (Normal): 0 00:15:34.273 Atomic Boundary Size (PFail): 0 00:15:34.273 Atomic Boundary Offset: 0 00:15:34.273 Maximum Single Source Range Length: 65535 00:15:34.273 Maximum Copy Length: 65535 00:15:34.273 Maximum Source Range Count: 1 00:15:34.273 NGUID/EUI64 Never Reused: No 00:15:34.274 Namespace Write Protected: No 00:15:34.274 Number of LBA Formats: 1 00:15:34.274 Current LBA Format: LBA Format #00 00:15:34.274 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:34.274 00:15:34.274 05:14:24 -- host/identify.sh@51 -- # sync 00:15:34.531 05:14:24 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:34.531 05:14:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.531 05:14:24 -- common/autotest_common.sh@10 -- # set +x 00:15:34.531 05:14:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.531 05:14:24 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:34.531 05:14:24 -- host/identify.sh@56 -- # nvmftestfini 00:15:34.531 05:14:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:34.531 05:14:24 -- nvmf/common.sh@116 -- # sync 00:15:34.531 05:14:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:34.531 05:14:24 -- nvmf/common.sh@119 -- # set +e 00:15:34.531 05:14:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:34.531 05:14:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:34.531 rmmod nvme_tcp 00:15:34.531 rmmod nvme_fabrics 00:15:34.531 rmmod nvme_keyring 00:15:34.531 05:14:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:34.531 05:14:24 -- nvmf/common.sh@123 -- # set -e 00:15:34.531 05:14:24 -- nvmf/common.sh@124 -- # return 0 00:15:34.531 05:14:24 -- nvmf/common.sh@477 -- # '[' -n 80557 ']' 00:15:34.531 05:14:24 -- nvmf/common.sh@478 -- # killprocess 80557 00:15:34.531 05:14:24 -- common/autotest_common.sh@936 -- # '[' -z 80557 ']' 00:15:34.531 05:14:24 -- common/autotest_common.sh@940 -- # kill -0 80557 00:15:34.531 05:14:24 -- common/autotest_common.sh@941 -- # uname 00:15:34.531 05:14:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:34.531 05:14:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80557 00:15:34.531 05:14:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:34.531 05:14:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 
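(The shell trace around this point is the identify test's teardown: `rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1` removes the subsystem over the target's RPC socket, `nvmftestfini` unloads the host-side modules — the `rmmod nvme_tcp` / `nvme_fabrics` / `nvme_keyring` lines — and `killprocess 80557` stops the nvmf_tgt reactor. A condensed, hypothetical equivalent outside the harness, assuming the default RPC socket and the PID from this run:)

```bash
# Condensed sketch of the teardown traced here; rpc_cmd, nvmftestfini and
# killprocess are test-harness wrappers around these underlying commands.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # drop the subsystem from the target
modprobe -r nvme-tcp nvme-fabrics                           # unload host-side transport modules
kill 80557                                                  # stop the nvmf_tgt app (PID from this run)
while kill -0 80557 2>/dev/null; do sleep 0.1; done         # wait for the process to exit
```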
00:15:34.531 killing process with pid 80557 00:15:34.531 05:14:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80557' 00:15:34.531 05:14:24 -- common/autotest_common.sh@955 -- # kill 80557 00:15:34.531 [2024-12-08 05:14:24.167428] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:15:34.531 05:14:24 -- common/autotest_common.sh@960 -- # wait 80557 00:15:34.788 05:14:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:34.788 05:14:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:34.788 05:14:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:34.788 05:14:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.788 05:14:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:34.788 05:14:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.788 05:14:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.788 05:14:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.788 05:14:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:34.788 00:15:34.788 real 0m2.545s 00:15:34.788 user 0m7.326s 00:15:34.788 sys 0m0.567s 00:15:34.788 05:14:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:34.788 ************************************ 00:15:34.788 05:14:24 -- common/autotest_common.sh@10 -- # set +x 00:15:34.788 END TEST nvmf_identify 00:15:34.788 ************************************ 00:15:34.788 05:14:24 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:34.788 05:14:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:34.788 05:14:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:34.788 05:14:24 -- common/autotest_common.sh@10 -- # set +x 00:15:34.788 ************************************ 00:15:34.788 START TEST nvmf_perf 00:15:34.788 ************************************ 00:15:34.788 05:14:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:34.788 * Looking for test storage... 
00:15:34.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:34.788 05:14:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:34.788 05:14:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:34.788 05:14:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:34.788 05:14:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:34.788 05:14:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:34.788 05:14:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:34.788 05:14:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:34.788 05:14:24 -- scripts/common.sh@335 -- # IFS=.-: 00:15:34.788 05:14:24 -- scripts/common.sh@335 -- # read -ra ver1 00:15:34.788 05:14:24 -- scripts/common.sh@336 -- # IFS=.-: 00:15:34.788 05:14:24 -- scripts/common.sh@336 -- # read -ra ver2 00:15:34.788 05:14:24 -- scripts/common.sh@337 -- # local 'op=<' 00:15:34.788 05:14:24 -- scripts/common.sh@339 -- # ver1_l=2 00:15:34.788 05:14:24 -- scripts/common.sh@340 -- # ver2_l=1 00:15:34.788 05:14:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:34.788 05:14:24 -- scripts/common.sh@343 -- # case "$op" in 00:15:34.788 05:14:24 -- scripts/common.sh@344 -- # : 1 00:15:34.788 05:14:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:34.788 05:14:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:34.788 05:14:24 -- scripts/common.sh@364 -- # decimal 1 00:15:34.788 05:14:24 -- scripts/common.sh@352 -- # local d=1 00:15:34.788 05:14:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:34.788 05:14:24 -- scripts/common.sh@354 -- # echo 1 00:15:34.788 05:14:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:34.788 05:14:24 -- scripts/common.sh@365 -- # decimal 2 00:15:34.788 05:14:24 -- scripts/common.sh@352 -- # local d=2 00:15:34.788 05:14:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:34.788 05:14:24 -- scripts/common.sh@354 -- # echo 2 00:15:35.046 05:14:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:35.046 05:14:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:35.046 05:14:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:35.046 05:14:24 -- scripts/common.sh@367 -- # return 0 00:15:35.046 05:14:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:35.046 05:14:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:35.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.046 --rc genhtml_branch_coverage=1 00:15:35.046 --rc genhtml_function_coverage=1 00:15:35.046 --rc genhtml_legend=1 00:15:35.046 --rc geninfo_all_blocks=1 00:15:35.046 --rc geninfo_unexecuted_blocks=1 00:15:35.046 00:15:35.046 ' 00:15:35.046 05:14:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:35.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.046 --rc genhtml_branch_coverage=1 00:15:35.046 --rc genhtml_function_coverage=1 00:15:35.046 --rc genhtml_legend=1 00:15:35.046 --rc geninfo_all_blocks=1 00:15:35.046 --rc geninfo_unexecuted_blocks=1 00:15:35.046 00:15:35.046 ' 00:15:35.046 05:14:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:35.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.046 --rc genhtml_branch_coverage=1 00:15:35.046 --rc genhtml_function_coverage=1 00:15:35.046 --rc genhtml_legend=1 00:15:35.046 --rc geninfo_all_blocks=1 00:15:35.046 --rc geninfo_unexecuted_blocks=1 00:15:35.046 00:15:35.046 ' 00:15:35.046 
05:14:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:35.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.046 --rc genhtml_branch_coverage=1 00:15:35.046 --rc genhtml_function_coverage=1 00:15:35.046 --rc genhtml_legend=1 00:15:35.046 --rc geninfo_all_blocks=1 00:15:35.046 --rc geninfo_unexecuted_blocks=1 00:15:35.046 00:15:35.046 ' 00:15:35.046 05:14:24 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:35.046 05:14:24 -- nvmf/common.sh@7 -- # uname -s 00:15:35.046 05:14:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.046 05:14:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.046 05:14:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.046 05:14:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.046 05:14:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.046 05:14:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.046 05:14:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.046 05:14:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.046 05:14:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.046 05:14:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.046 05:14:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:15:35.046 05:14:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:15:35.046 05:14:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.046 05:14:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.046 05:14:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:35.046 05:14:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:35.046 05:14:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.046 05:14:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.046 05:14:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.046 05:14:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.046 05:14:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.046 05:14:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.046 05:14:24 -- paths/export.sh@5 -- # export PATH 00:15:35.047 05:14:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.047 05:14:24 -- nvmf/common.sh@46 -- # : 0 00:15:35.047 05:14:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:35.047 05:14:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:35.047 05:14:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:35.047 05:14:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.047 05:14:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.047 05:14:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:35.047 05:14:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:35.047 05:14:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:35.047 05:14:24 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:35.047 05:14:24 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:35.047 05:14:24 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:35.047 05:14:24 -- host/perf.sh@17 -- # nvmftestinit 00:15:35.047 05:14:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:35.047 05:14:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:35.047 05:14:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:35.047 05:14:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:35.047 05:14:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:35.047 05:14:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.047 05:14:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.047 05:14:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.047 05:14:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:35.047 05:14:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:35.047 05:14:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:35.047 05:14:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:35.047 05:14:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:35.047 05:14:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:35.047 05:14:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.047 05:14:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.047 05:14:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:35.047 05:14:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:35.047 05:14:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:35.047 05:14:24 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:35.047 05:14:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:35.047 05:14:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.047 05:14:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:35.047 05:14:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:35.047 05:14:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:35.047 05:14:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:35.047 05:14:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:35.047 05:14:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:35.047 Cannot find device "nvmf_tgt_br" 00:15:35.047 05:14:24 -- nvmf/common.sh@154 -- # true 00:15:35.047 05:14:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:35.047 Cannot find device "nvmf_tgt_br2" 00:15:35.047 05:14:24 -- nvmf/common.sh@155 -- # true 00:15:35.047 05:14:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:35.047 05:14:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:35.047 Cannot find device "nvmf_tgt_br" 00:15:35.047 05:14:24 -- nvmf/common.sh@157 -- # true 00:15:35.047 05:14:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:35.047 Cannot find device "nvmf_tgt_br2" 00:15:35.047 05:14:24 -- nvmf/common.sh@158 -- # true 00:15:35.047 05:14:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:35.047 05:14:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:35.047 05:14:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:35.047 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:35.047 05:14:24 -- nvmf/common.sh@161 -- # true 00:15:35.047 05:14:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:35.047 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:35.047 05:14:24 -- nvmf/common.sh@162 -- # true 00:15:35.047 05:14:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:35.047 05:14:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:35.047 05:14:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:35.047 05:14:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:35.047 05:14:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:35.047 05:14:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:35.047 05:14:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:35.047 05:14:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:35.047 05:14:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:35.047 05:14:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:35.047 05:14:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:35.047 05:14:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:35.047 05:14:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:35.047 05:14:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:35.047 05:14:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
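Condensed for reference, the nvmf_veth_init sequence traced above builds the following topology; interface, namespace, and address names are exactly the ones in the trace (this is an illustration of what was just executed, not an extra step in the run):

    # Target side lives in its own network namespace, reached over veth pairs.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target pair #1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target pair #2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # move target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link set nvmf_init_if up                                    # bring host-side links up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up      # and the namespaced links
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up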
00:15:35.047 05:14:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:35.305 05:14:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:35.305 05:14:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:35.305 05:14:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:35.305 05:14:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:35.305 05:14:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:35.305 05:14:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:35.305 05:14:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:35.305 05:14:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:35.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:35.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:15:35.305 00:15:35.305 --- 10.0.0.2 ping statistics --- 00:15:35.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.305 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:35.305 05:14:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:35.305 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:35.305 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:15:35.305 00:15:35.305 --- 10.0.0.3 ping statistics --- 00:15:35.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.305 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:35.305 05:14:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:35.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:35.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:35.305 00:15:35.305 --- 10.0.0.1 ping statistics --- 00:15:35.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.305 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:35.305 05:14:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.305 05:14:24 -- nvmf/common.sh@421 -- # return 0 00:15:35.305 05:14:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:35.305 05:14:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.305 05:14:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:35.305 05:14:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:35.305 05:14:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.305 05:14:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:35.305 05:14:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:35.305 05:14:24 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:35.305 05:14:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:35.305 05:14:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:35.305 05:14:24 -- common/autotest_common.sh@10 -- # set +x 00:15:35.305 05:14:24 -- nvmf/common.sh@469 -- # nvmfpid=80778 00:15:35.305 05:14:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:35.305 05:14:24 -- nvmf/common.sh@470 -- # waitforlisten 80778 00:15:35.305 05:14:24 -- common/autotest_common.sh@829 -- # '[' -z 80778 ']' 00:15:35.305 05:14:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.305 05:14:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:35.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
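Once the links exist, the remaining setup traced above bridges the veth peers, opens TCP port 4420, verifies reachability with single pings, and launches nvmf_tgt inside the namespace. A condensed sketch using the same names and flags as the trace; the readiness loop at the end is only an illustrative stand-in for the waitforlisten helper, not its actual implementation:

    # Bridge the host-side peers and let NVMe/TCP traffic in.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3              # initiator -> both target addresses
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator

    # Launch the target inside the namespace: -i 0 = shm id 0, -e 0xFFFF = enable all
    # tracepoint groups, -m 0xF = run reactors on cores 0-3 (0xF is binary 1111).
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Stand-in for waitforlisten: poll until the RPC socket accepts requests.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done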
00:15:35.305 05:14:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.305 05:14:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:35.305 05:14:24 -- common/autotest_common.sh@10 -- # set +x 00:15:35.305 [2024-12-08 05:14:24.991965] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:35.305 [2024-12-08 05:14:24.992091] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.563 [2024-12-08 05:14:25.138769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:35.563 [2024-12-08 05:14:25.176350] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:35.563 [2024-12-08 05:14:25.176799] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.563 [2024-12-08 05:14:25.176943] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.563 [2024-12-08 05:14:25.177153] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:35.563 [2024-12-08 05:14:25.177483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.563 [2024-12-08 05:14:25.177581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.563 [2024-12-08 05:14:25.177980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:35.563 [2024-12-08 05:14:25.177999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.493 05:14:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.493 05:14:25 -- common/autotest_common.sh@862 -- # return 0 00:15:36.493 05:14:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:36.493 05:14:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:36.493 05:14:25 -- common/autotest_common.sh@10 -- # set +x 00:15:36.493 05:14:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.493 05:14:25 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:36.493 05:14:25 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:36.750 05:14:26 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:36.750 05:14:26 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:37.315 05:14:26 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:15:37.315 05:14:26 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:37.572 05:14:27 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:37.572 05:14:27 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:15:37.572 05:14:27 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:37.572 05:14:27 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:37.572 05:14:27 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:37.830 [2024-12-08 05:14:27.611598] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.088 05:14:27 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:38.345 05:14:27 -- 
host/perf.sh@45 -- # for bdev in $bdevs 00:15:38.345 05:14:27 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:38.603 05:14:28 -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:38.603 05:14:28 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:38.861 05:14:28 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:39.119 [2024-12-08 05:14:28.887754] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:39.377 05:14:28 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:39.634 05:14:29 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:15:39.634 05:14:29 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:15:39.634 05:14:29 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:39.634 05:14:29 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:15:40.619 Initializing NVMe Controllers 00:15:40.619 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:15:40.619 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:15:40.619 Initialization complete. Launching workers. 00:15:40.619 ======================================================== 00:15:40.619 Latency(us) 00:15:40.619 Device Information : IOPS MiB/s Average min max 00:15:40.619 PCIE (0000:00:06.0) NSID 1 from core 0: 26368.97 103.00 1212.87 241.04 5179.70 00:15:40.619 ======================================================== 00:15:40.619 Total : 26368.97 103.00 1212.87 241.04 5179.70 00:15:40.619 00:15:40.619 05:14:30 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:41.993 Initializing NVMe Controllers 00:15:41.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:41.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:41.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:41.993 Initialization complete. Launching workers. 
00:15:41.993 ======================================================== 00:15:41.993 Latency(us) 00:15:41.993 Device Information : IOPS MiB/s Average min max 00:15:41.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3163.20 12.36 315.71 113.75 4302.58 00:15:41.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.50 0.48 8160.25 5015.34 12061.71 00:15:41.994 ======================================================== 00:15:41.994 Total : 3286.71 12.84 610.47 113.75 12061.71 00:15:41.994 00:15:41.994 05:14:31 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:43.366 Initializing NVMe Controllers 00:15:43.366 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:43.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:43.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:43.366 Initialization complete. Launching workers. 00:15:43.366 ======================================================== 00:15:43.366 Latency(us) 00:15:43.366 Device Information : IOPS MiB/s Average min max 00:15:43.366 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7280.23 28.44 4395.53 548.90 16063.64 00:15:43.366 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3808.33 14.88 8462.43 4840.28 23038.74 00:15:43.366 ======================================================== 00:15:43.366 Total : 11088.56 43.31 5792.30 548.90 23038.74 00:15:43.366 00:15:43.366 05:14:33 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:43.366 05:14:33 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:45.935 Initializing NVMe Controllers 00:15:45.935 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:45.935 Controller IO queue size 128, less than required. 00:15:45.935 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:45.935 Controller IO queue size 128, less than required. 00:15:45.935 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:45.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:45.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:45.936 Initialization complete. Launching workers. 
00:15:45.936 ======================================================== 00:15:45.936 Latency(us) 00:15:45.936 Device Information : IOPS MiB/s Average min max 00:15:45.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1336.29 334.07 96776.24 41984.88 217017.47 00:15:45.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 543.41 135.85 240575.28 81251.80 387997.98 00:15:45.936 ======================================================== 00:15:45.936 Total : 1879.70 469.93 138347.93 41984.88 387997.98 00:15:45.936 00:15:45.936 05:14:35 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:15:46.193 No valid NVMe controllers or AIO or URING devices found 00:15:46.193 Initializing NVMe Controllers 00:15:46.193 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:46.193 Controller IO queue size 128, less than required. 00:15:46.193 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:46.193 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:46.193 Controller IO queue size 128, less than required. 00:15:46.193 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:46.193 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:46.193 WARNING: Some requested NVMe devices were skipped 00:15:46.193 05:14:35 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:15:48.818 Initializing NVMe Controllers 00:15:48.818 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:48.818 Controller IO queue size 128, less than required. 00:15:48.818 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:48.818 Controller IO queue size 128, less than required. 00:15:48.818 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:48.818 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:48.818 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:48.818 Initialization complete. Launching workers. 
00:15:48.818 00:15:48.818 ==================== 00:15:48.818 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:48.818 TCP transport: 00:15:48.818 polls: 11211 00:15:48.818 idle_polls: 0 00:15:48.818 sock_completions: 11211 00:15:48.818 nvme_completions: 7118 00:15:48.818 submitted_requests: 10952 00:15:48.818 queued_requests: 1 00:15:48.818 00:15:48.818 ==================== 00:15:48.818 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:48.818 TCP transport: 00:15:48.818 polls: 11720 00:15:48.818 idle_polls: 0 00:15:48.818 sock_completions: 11720 00:15:48.818 nvme_completions: 5070 00:15:48.818 submitted_requests: 7758 00:15:48.818 queued_requests: 1 00:15:48.818 ======================================================== 00:15:48.818 Latency(us) 00:15:48.818 Device Information : IOPS MiB/s Average min max 00:15:48.818 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1841.75 460.44 70172.39 32037.76 128837.62 00:15:48.818 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1330.10 332.53 97128.65 37296.65 187853.47 00:15:48.818 ======================================================== 00:15:48.818 Total : 3171.85 792.96 81476.35 32037.76 187853.47 00:15:48.818 00:15:48.818 05:14:38 -- host/perf.sh@66 -- # sync 00:15:48.818 05:14:38 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:49.076 05:14:38 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:15:49.076 05:14:38 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:15:49.076 05:14:38 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:15:49.642 05:14:39 -- host/perf.sh@72 -- # ls_guid=132481dd-bf60-4d6a-8ed3-02134afd6ae5 00:15:49.642 05:14:39 -- host/perf.sh@73 -- # get_lvs_free_mb 132481dd-bf60-4d6a-8ed3-02134afd6ae5 00:15:49.642 05:14:39 -- common/autotest_common.sh@1353 -- # local lvs_uuid=132481dd-bf60-4d6a-8ed3-02134afd6ae5 00:15:49.642 05:14:39 -- common/autotest_common.sh@1354 -- # local lvs_info 00:15:49.642 05:14:39 -- common/autotest_common.sh@1355 -- # local fc 00:15:49.642 05:14:39 -- common/autotest_common.sh@1356 -- # local cs 00:15:49.642 05:14:39 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:49.900 05:14:39 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:15:49.900 { 00:15:49.900 "uuid": "132481dd-bf60-4d6a-8ed3-02134afd6ae5", 00:15:49.900 "name": "lvs_0", 00:15:49.900 "base_bdev": "Nvme0n1", 00:15:49.900 "total_data_clusters": 1278, 00:15:49.900 "free_clusters": 1278, 00:15:49.900 "block_size": 4096, 00:15:49.900 "cluster_size": 4194304 00:15:49.900 } 00:15:49.900 ]' 00:15:49.900 05:14:39 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="132481dd-bf60-4d6a-8ed3-02134afd6ae5") .free_clusters' 00:15:49.900 05:14:39 -- common/autotest_common.sh@1358 -- # fc=1278 00:15:49.900 05:14:39 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="132481dd-bf60-4d6a-8ed3-02134afd6ae5") .cluster_size' 00:15:50.157 5112 00:15:50.157 05:14:39 -- common/autotest_common.sh@1359 -- # cs=4194304 00:15:50.157 05:14:39 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:15:50.157 05:14:39 -- common/autotest_common.sh@1363 -- # echo 5112 00:15:50.157 05:14:39 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:15:50.157 05:14:39 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -u 132481dd-bf60-4d6a-8ed3-02134afd6ae5 lbd_0 5112 00:15:50.416 05:14:40 -- host/perf.sh@80 -- # lb_guid=4a6fc8a9-f640-4723-92b1-f38e770faa4e 00:15:50.416 05:14:40 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 4a6fc8a9-f640-4723-92b1-f38e770faa4e lvs_n_0 00:15:50.982 05:14:40 -- host/perf.sh@83 -- # ls_nested_guid=cddbdd88-82ed-4865-8917-5cd0677af754 00:15:50.982 05:14:40 -- host/perf.sh@84 -- # get_lvs_free_mb cddbdd88-82ed-4865-8917-5cd0677af754 00:15:50.982 05:14:40 -- common/autotest_common.sh@1353 -- # local lvs_uuid=cddbdd88-82ed-4865-8917-5cd0677af754 00:15:50.982 05:14:40 -- common/autotest_common.sh@1354 -- # local lvs_info 00:15:50.982 05:14:40 -- common/autotest_common.sh@1355 -- # local fc 00:15:50.982 05:14:40 -- common/autotest_common.sh@1356 -- # local cs 00:15:50.982 05:14:40 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:51.239 05:14:40 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:15:51.239 { 00:15:51.239 "uuid": "132481dd-bf60-4d6a-8ed3-02134afd6ae5", 00:15:51.239 "name": "lvs_0", 00:15:51.239 "base_bdev": "Nvme0n1", 00:15:51.239 "total_data_clusters": 1278, 00:15:51.239 "free_clusters": 0, 00:15:51.239 "block_size": 4096, 00:15:51.239 "cluster_size": 4194304 00:15:51.239 }, 00:15:51.239 { 00:15:51.239 "uuid": "cddbdd88-82ed-4865-8917-5cd0677af754", 00:15:51.239 "name": "lvs_n_0", 00:15:51.239 "base_bdev": "4a6fc8a9-f640-4723-92b1-f38e770faa4e", 00:15:51.239 "total_data_clusters": 1276, 00:15:51.239 "free_clusters": 1276, 00:15:51.239 "block_size": 4096, 00:15:51.239 "cluster_size": 4194304 00:15:51.239 } 00:15:51.239 ]' 00:15:51.239 05:14:40 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="cddbdd88-82ed-4865-8917-5cd0677af754") .free_clusters' 00:15:51.239 05:14:40 -- common/autotest_common.sh@1358 -- # fc=1276 00:15:51.239 05:14:40 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="cddbdd88-82ed-4865-8917-5cd0677af754") .cluster_size' 00:15:51.239 05:14:41 -- common/autotest_common.sh@1359 -- # cs=4194304 00:15:51.239 05:14:41 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:15:51.239 05:14:41 -- common/autotest_common.sh@1363 -- # echo 5104 00:15:51.239 5104 00:15:51.239 05:14:41 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:15:51.240 05:14:41 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u cddbdd88-82ed-4865-8917-5cd0677af754 lbd_nest_0 5104 00:15:51.804 05:14:41 -- host/perf.sh@88 -- # lb_nested_guid=6ea303be-8d8a-4ffe-8c92-8d02fc10225d 00:15:51.804 05:14:41 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:52.061 05:14:41 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:15:52.061 05:14:41 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 6ea303be-8d8a-4ffe-8c92-8d02fc10225d 00:15:52.318 05:14:42 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:52.884 05:14:42 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:15:52.884 05:14:42 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:15:52.884 05:14:42 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:52.884 05:14:42 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:52.884 05:14:42 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:53.450 No valid NVMe controllers or AIO or URING devices found 00:15:53.450 Initializing NVMe Controllers 00:15:53.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:53.450 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:53.450 WARNING: Some requested NVMe devices were skipped 00:15:53.450 05:14:42 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:53.450 05:14:42 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:03.466 Initializing NVMe Controllers 00:16:03.466 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:03.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:03.466 Initialization complete. Launching workers. 00:16:03.466 ======================================================== 00:16:03.466 Latency(us) 00:16:03.466 Device Information : IOPS MiB/s Average min max 00:16:03.466 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1090.10 136.26 916.46 322.82 7391.63 00:16:03.466 ======================================================== 00:16:03.466 Total : 1090.10 136.26 916.46 322.82 7391.63 00:16:03.466 00:16:03.466 05:14:53 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:16:03.466 05:14:53 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:16:03.466 05:14:53 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:03.724 No valid NVMe controllers or AIO or URING devices found 00:16:03.984 Initializing NVMe Controllers 00:16:03.984 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:03.984 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:16:03.984 WARNING: Some requested NVMe devices were skipped 00:16:03.984 05:14:53 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:16:03.984 05:14:53 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:16.184 Initializing NVMe Controllers 00:16:16.184 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:16.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:16.184 Initialization complete. Launching workers. 
00:16:16.184 ======================================================== 00:16:16.184 Latency(us) 00:16:16.184 Device Information : IOPS MiB/s Average min max 00:16:16.184 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1284.00 160.50 24951.32 6352.26 75951.64 00:16:16.184 ======================================================== 00:16:16.184 Total : 1284.00 160.50 24951.32 6352.26 75951.64 00:16:16.184 00:16:16.185 05:15:03 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:16:16.185 05:15:03 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:16:16.185 05:15:03 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:16.185 No valid NVMe controllers or AIO or URING devices found 00:16:16.185 Initializing NVMe Controllers 00:16:16.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:16.185 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:16:16.185 WARNING: Some requested NVMe devices were skipped 00:16:16.185 05:15:04 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:16:16.185 05:15:04 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:26.267 Initializing NVMe Controllers 00:16:26.267 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:26.267 Controller IO queue size 128, less than required. 00:16:26.267 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:26.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:26.267 Initialization complete. Launching workers. 
00:16:26.267 ======================================================== 00:16:26.267 Latency(us) 00:16:26.267 Device Information : IOPS MiB/s Average min max 00:16:26.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3393.68 424.21 37726.56 10506.06 135405.95 00:16:26.267 ======================================================== 00:16:26.267 Total : 3393.68 424.21 37726.56 10506.06 135405.95 00:16:26.267 00:16:26.267 05:15:14 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:26.267 05:15:14 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6ea303be-8d8a-4ffe-8c92-8d02fc10225d 00:16:26.267 05:15:15 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:16:26.267 05:15:15 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4a6fc8a9-f640-4723-92b1-f38e770faa4e 00:16:26.267 05:15:15 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:16:26.267 05:15:15 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:26.267 05:15:15 -- host/perf.sh@114 -- # nvmftestfini 00:16:26.267 05:15:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:26.267 05:15:15 -- nvmf/common.sh@116 -- # sync 00:16:26.267 05:15:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:26.267 05:15:15 -- nvmf/common.sh@119 -- # set +e 00:16:26.267 05:15:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:26.267 05:15:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:26.267 rmmod nvme_tcp 00:16:26.267 rmmod nvme_fabrics 00:16:26.267 rmmod nvme_keyring 00:16:26.267 05:15:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:26.267 05:15:16 -- nvmf/common.sh@123 -- # set -e 00:16:26.267 05:15:16 -- nvmf/common.sh@124 -- # return 0 00:16:26.267 05:15:16 -- nvmf/common.sh@477 -- # '[' -n 80778 ']' 00:16:26.267 05:15:16 -- nvmf/common.sh@478 -- # killprocess 80778 00:16:26.267 05:15:16 -- common/autotest_common.sh@936 -- # '[' -z 80778 ']' 00:16:26.267 05:15:16 -- common/autotest_common.sh@940 -- # kill -0 80778 00:16:26.267 05:15:16 -- common/autotest_common.sh@941 -- # uname 00:16:26.267 05:15:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:26.267 05:15:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80778 00:16:26.525 killing process with pid 80778 00:16:26.525 05:15:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:26.525 05:15:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:26.525 05:15:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80778' 00:16:26.525 05:15:16 -- common/autotest_common.sh@955 -- # kill 80778 00:16:26.525 05:15:16 -- common/autotest_common.sh@960 -- # wait 80778 00:16:27.894 05:15:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:27.894 05:15:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:27.894 05:15:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:27.894 05:15:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:27.894 05:15:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:27.895 05:15:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.895 05:15:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:27.895 05:15:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.895 05:15:17 -- nvmf/common.sh@278 -- # 
ip -4 addr flush nvmf_init_if 00:16:27.895 00:16:27.895 real 0m52.950s 00:16:27.895 user 3m20.278s 00:16:27.895 sys 0m13.396s 00:16:27.895 05:15:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:27.895 05:15:17 -- common/autotest_common.sh@10 -- # set +x 00:16:27.895 ************************************ 00:16:27.895 END TEST nvmf_perf 00:16:27.895 ************************************ 00:16:27.895 05:15:17 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:27.895 05:15:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:27.895 05:15:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:27.895 05:15:17 -- common/autotest_common.sh@10 -- # set +x 00:16:27.895 ************************************ 00:16:27.895 START TEST nvmf_fio_host 00:16:27.895 ************************************ 00:16:27.895 05:15:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:27.895 * Looking for test storage... 00:16:27.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:27.895 05:15:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:27.895 05:15:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:27.895 05:15:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:27.895 05:15:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:27.895 05:15:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:27.895 05:15:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:27.895 05:15:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:27.895 05:15:17 -- scripts/common.sh@335 -- # IFS=.-: 00:16:27.895 05:15:17 -- scripts/common.sh@335 -- # read -ra ver1 00:16:27.895 05:15:17 -- scripts/common.sh@336 -- # IFS=.-: 00:16:27.895 05:15:17 -- scripts/common.sh@336 -- # read -ra ver2 00:16:27.895 05:15:17 -- scripts/common.sh@337 -- # local 'op=<' 00:16:27.895 05:15:17 -- scripts/common.sh@339 -- # ver1_l=2 00:16:27.895 05:15:17 -- scripts/common.sh@340 -- # ver2_l=1 00:16:27.895 05:15:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:27.895 05:15:17 -- scripts/common.sh@343 -- # case "$op" in 00:16:27.895 05:15:17 -- scripts/common.sh@344 -- # : 1 00:16:27.895 05:15:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:27.895 05:15:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:27.895 05:15:17 -- scripts/common.sh@364 -- # decimal 1 00:16:27.895 05:15:17 -- scripts/common.sh@352 -- # local d=1 00:16:27.895 05:15:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:27.895 05:15:17 -- scripts/common.sh@354 -- # echo 1 00:16:27.895 05:15:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:27.895 05:15:17 -- scripts/common.sh@365 -- # decimal 2 00:16:27.895 05:15:17 -- scripts/common.sh@352 -- # local d=2 00:16:27.895 05:15:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:27.895 05:15:17 -- scripts/common.sh@354 -- # echo 2 00:16:27.895 05:15:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:27.895 05:15:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:27.895 05:15:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:27.895 05:15:17 -- scripts/common.sh@367 -- # return 0 00:16:27.895 05:15:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:27.895 05:15:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:27.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.895 --rc genhtml_branch_coverage=1 00:16:27.895 --rc genhtml_function_coverage=1 00:16:27.895 --rc genhtml_legend=1 00:16:27.895 --rc geninfo_all_blocks=1 00:16:27.895 --rc geninfo_unexecuted_blocks=1 00:16:27.895 00:16:27.895 ' 00:16:27.895 05:15:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:27.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.895 --rc genhtml_branch_coverage=1 00:16:27.895 --rc genhtml_function_coverage=1 00:16:27.895 --rc genhtml_legend=1 00:16:27.895 --rc geninfo_all_blocks=1 00:16:27.895 --rc geninfo_unexecuted_blocks=1 00:16:27.895 00:16:27.895 ' 00:16:27.895 05:15:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:27.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.895 --rc genhtml_branch_coverage=1 00:16:27.895 --rc genhtml_function_coverage=1 00:16:27.895 --rc genhtml_legend=1 00:16:27.895 --rc geninfo_all_blocks=1 00:16:27.895 --rc geninfo_unexecuted_blocks=1 00:16:27.895 00:16:27.895 ' 00:16:27.895 05:15:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:27.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.895 --rc genhtml_branch_coverage=1 00:16:27.895 --rc genhtml_function_coverage=1 00:16:27.895 --rc genhtml_legend=1 00:16:27.895 --rc geninfo_all_blocks=1 00:16:27.895 --rc geninfo_unexecuted_blocks=1 00:16:27.895 00:16:27.895 ' 00:16:27.895 05:15:17 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:27.895 05:15:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.895 05:15:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.895 05:15:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.895 05:15:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.895 05:15:17 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.895 05:15:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.895 05:15:17 -- paths/export.sh@5 -- # export PATH 00:16:27.895 05:15:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.895 05:15:17 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:27.895 05:15:17 -- nvmf/common.sh@7 -- # uname -s 00:16:27.895 05:15:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.895 05:15:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.895 05:15:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.895 05:15:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.895 05:15:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.895 05:15:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.895 05:15:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.895 05:15:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.896 05:15:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.896 05:15:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.896 05:15:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:16:27.896 05:15:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:16:27.896 05:15:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.896 05:15:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.896 05:15:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:27.896 05:15:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:27.896 05:15:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.896 05:15:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.896 05:15:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.896 05:15:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.896 05:15:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.896 05:15:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.896 05:15:17 -- paths/export.sh@5 -- # export PATH 00:16:27.896 05:15:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.896 05:15:17 -- nvmf/common.sh@46 -- # : 0 00:16:27.896 05:15:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:27.896 05:15:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:27.896 05:15:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:27.896 05:15:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.896 05:15:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.896 05:15:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:27.896 05:15:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:27.896 05:15:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:27.896 05:15:17 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:27.896 05:15:17 -- host/fio.sh@14 -- # nvmftestinit 00:16:27.896 05:15:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:27.896 05:15:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:27.896 05:15:17 -- nvmf/common.sh@436 -- # prepare_net_devs 
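The lcov probe traced at the top of this test (lt 1.15 2 via cmp_versions) decides whether the old-lcov branch/function coverage flags are appended to LCOV_OPTS. Rough shell paraphrase of that comparison as it is stepped through above, assuming purely numeric version components; lt_sketch is an illustrative name, not a helper the scripts define:

    # Paraphrase of the lt()/cmp_versions logic from scripts/common.sh:
    # split both versions on ".-:" and compare element by element.
    lt_sketch() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    # lcov 1.15 < 2, so the old-lcov coverage flags get added, as seen in LCOV_OPTS above.
    lt_sketch 1.15 2 && echo "pass --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"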
00:16:27.896 05:15:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:27.896 05:15:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:27.896 05:15:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.896 05:15:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:27.896 05:15:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.896 05:15:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:27.896 05:15:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:27.896 05:15:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:27.896 05:15:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:27.896 05:15:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:27.896 05:15:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:27.896 05:15:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.896 05:15:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:27.896 05:15:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:27.896 05:15:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:27.896 05:15:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:27.896 05:15:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:27.896 05:15:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:27.896 05:15:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.896 05:15:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:27.896 05:15:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:27.896 05:15:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:27.896 05:15:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:27.896 05:15:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:27.896 05:15:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:28.154 Cannot find device "nvmf_tgt_br" 00:16:28.154 05:15:17 -- nvmf/common.sh@154 -- # true 00:16:28.154 05:15:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:28.154 Cannot find device "nvmf_tgt_br2" 00:16:28.154 05:15:17 -- nvmf/common.sh@155 -- # true 00:16:28.154 05:15:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:28.154 05:15:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:28.154 Cannot find device "nvmf_tgt_br" 00:16:28.154 05:15:17 -- nvmf/common.sh@157 -- # true 00:16:28.154 05:15:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:28.154 Cannot find device "nvmf_tgt_br2" 00:16:28.154 05:15:17 -- nvmf/common.sh@158 -- # true 00:16:28.154 05:15:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:28.154 05:15:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:28.154 05:15:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:28.154 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:28.154 05:15:17 -- nvmf/common.sh@161 -- # true 00:16:28.154 05:15:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:28.154 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:28.154 05:15:17 -- nvmf/common.sh@162 -- # true 00:16:28.154 05:15:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:28.154 05:15:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:28.154 05:15:17 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:28.154 05:15:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:28.154 05:15:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:28.154 05:15:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:28.154 05:15:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:28.154 05:15:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:28.154 05:15:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:28.154 05:15:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:28.154 05:15:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:28.154 05:15:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:28.154 05:15:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:28.154 05:15:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:28.154 05:15:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:28.154 05:15:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:28.154 05:15:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:28.154 05:15:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:28.154 05:15:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:28.154 05:15:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:28.154 05:15:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:28.412 05:15:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:28.412 05:15:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:28.412 05:15:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:28.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:28.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:16:28.412 00:16:28.412 --- 10.0.0.2 ping statistics --- 00:16:28.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.412 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:28.412 05:15:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:28.412 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:28.412 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:16:28.412 00:16:28.412 --- 10.0.0.3 ping statistics --- 00:16:28.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.412 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:16:28.412 05:15:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:28.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:28.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:16:28.412 00:16:28.412 --- 10.0.0.1 ping statistics --- 00:16:28.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.412 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:28.412 05:15:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:28.412 05:15:17 -- nvmf/common.sh@421 -- # return 0 00:16:28.412 05:15:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:28.412 05:15:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:28.412 05:15:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:28.412 05:15:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:28.412 05:15:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:28.412 05:15:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:28.412 05:15:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:28.412 05:15:17 -- host/fio.sh@16 -- # [[ y != y ]] 00:16:28.412 05:15:17 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:16:28.412 05:15:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:28.412 05:15:17 -- common/autotest_common.sh@10 -- # set +x 00:16:28.412 05:15:17 -- host/fio.sh@24 -- # nvmfpid=81629 00:16:28.412 05:15:17 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:28.412 05:15:18 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:28.412 05:15:18 -- host/fio.sh@28 -- # waitforlisten 81629 00:16:28.412 05:15:18 -- common/autotest_common.sh@829 -- # '[' -z 81629 ']' 00:16:28.412 05:15:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.412 05:15:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:28.412 05:15:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.412 05:15:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:28.412 05:15:18 -- common/autotest_common.sh@10 -- # set +x 00:16:28.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.412 [2024-12-08 05:15:18.047272] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:28.412 [2024-12-08 05:15:18.047368] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.412 [2024-12-08 05:15:18.186066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:28.670 [2024-12-08 05:15:18.226221] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:28.670 [2024-12-08 05:15:18.226405] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.670 [2024-12-08 05:15:18.226421] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.670 [2024-12-08 05:15:18.226432] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
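The nvmf_veth_init sequence traced above builds a small bridged test topology: one initiator veth in the root namespace (10.0.0.1) and two target veths inside the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), all joined through the nvmf_br bridge, with iptables rules admitting NVMe/TCP traffic on port 4420. A condensed sketch of the same steps, using only the names and addresses shown in the trace (illustrative, not the verbatim nvmf/common.sh code):

  ip netns add nvmf_tgt_ns_spdk                                 # target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair (root namespace)
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target pair
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge joins the three peer ends
  for port in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$port" up
      ip link set "$port" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # initiator -> both target addresses

The three ping checks in the trace (10.0.0.2 and 10.0.0.3 from the root namespace, then 10.0.0.1 from inside the target namespace) confirm the bridge forwards in both directions before nvmf_tgt is started inside the namespace.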
00:16:28.670 [2024-12-08 05:15:18.226568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.670 [2024-12-08 05:15:18.226735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.670 [2024-12-08 05:15:18.227434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:28.670 [2024-12-08 05:15:18.227460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.604 05:15:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:29.604 05:15:19 -- common/autotest_common.sh@862 -- # return 0 00:16:29.604 05:15:19 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:29.604 [2024-12-08 05:15:19.294951] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.604 05:15:19 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:16:29.604 05:15:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:29.604 05:15:19 -- common/autotest_common.sh@10 -- # set +x 00:16:29.604 05:15:19 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:29.862 Malloc1 00:16:29.862 05:15:19 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:30.122 05:15:19 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:30.690 05:15:20 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:30.690 [2024-12-08 05:15:20.411945] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.690 05:15:20 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:30.948 05:15:20 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:30.948 05:15:20 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:30.948 05:15:20 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:30.948 05:15:20 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:16:30.948 05:15:20 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:30.948 05:15:20 -- common/autotest_common.sh@1328 -- # local sanitizers 00:16:30.949 05:15:20 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:30.949 05:15:20 -- common/autotest_common.sh@1330 -- # shift 00:16:30.949 05:15:20 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:16:30.949 05:15:20 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:30.949 05:15:20 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:30.949 05:15:20 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:30.949 05:15:20 -- common/autotest_common.sh@1334 -- # grep libasan 00:16:30.949 05:15:20 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:30.949 05:15:20 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:30.949 05:15:20 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:30.949 05:15:20 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:30.949 05:15:20 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:16:30.949 05:15:20 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:30.949 05:15:20 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:30.949 05:15:20 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:30.949 05:15:20 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:30.949 05:15:20 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:31.206 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:31.206 fio-3.35 00:16:31.206 Starting 1 thread 00:16:33.737 00:16:33.737 test: (groupid=0, jobs=1): err= 0: pid=81708: Sun Dec 8 05:15:23 2024 00:16:33.737 read: IOPS=9203, BW=36.0MiB/s (37.7MB/s)(72.1MiB/2006msec) 00:16:33.737 slat (usec): min=2, max=392, avg= 2.75, stdev= 3.94 00:16:33.737 clat (usec): min=3737, max=12772, avg=7220.51, stdev=642.31 00:16:33.737 lat (usec): min=3770, max=12774, avg=7223.26, stdev=642.55 00:16:33.737 clat percentiles (usec): 00:16:33.737 | 1.00th=[ 5932], 5.00th=[ 6390], 10.00th=[ 6587], 20.00th=[ 6783], 00:16:33.737 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7308], 00:16:33.737 | 70.00th=[ 7439], 80.00th=[ 7570], 90.00th=[ 7832], 95.00th=[ 8094], 00:16:33.737 | 99.00th=[10028], 99.50th=[10421], 99.90th=[11600], 99.95th=[11731], 00:16:33.737 | 99.99th=[12125] 00:16:33.737 bw ( KiB/s): min=35960, max=38016, per=99.92%, avg=36786.00, stdev=874.68, samples=4 00:16:33.737 iops : min= 8990, max= 9504, avg=9196.50, stdev=218.67, samples=4 00:16:33.737 write: IOPS=9209, BW=36.0MiB/s (37.7MB/s)(72.2MiB/2006msec); 0 zone resets 00:16:33.737 slat (usec): min=2, max=335, avg= 2.90, stdev= 2.86 00:16:33.737 clat (usec): min=3534, max=12108, avg=6610.28, stdev=582.73 00:16:33.737 lat (usec): min=3559, max=12110, avg=6613.19, stdev=583.02 00:16:33.737 clat percentiles (usec): 00:16:33.737 | 1.00th=[ 5407], 5.00th=[ 5932], 10.00th=[ 6063], 20.00th=[ 6259], 00:16:33.737 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6521], 60.00th=[ 6652], 00:16:33.737 | 70.00th=[ 6783], 80.00th=[ 6915], 90.00th=[ 7111], 95.00th=[ 7373], 00:16:33.737 | 99.00th=[ 9110], 99.50th=[ 9765], 99.90th=[10683], 99.95th=[11207], 00:16:33.737 | 99.99th=[11994] 00:16:33.737 bw ( KiB/s): min=36544, max=37248, per=99.99%, avg=36834.00, stdev=297.64, samples=4 00:16:33.737 iops : min= 9136, max= 9312, avg=9208.50, stdev=74.41, samples=4 00:16:33.737 lat (msec) : 4=0.02%, 10=99.40%, 20=0.58% 00:16:33.737 cpu : usr=69.18%, sys=22.14%, ctx=112, majf=0, minf=5 00:16:33.737 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:33.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:33.737 issued rwts: total=18463,18474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.737 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:33.737 00:16:33.737 Run status group 0 (all jobs): 00:16:33.737 READ: bw=36.0MiB/s (37.7MB/s), 36.0MiB/s-36.0MiB/s (37.7MB/s-37.7MB/s), io=72.1MiB (75.6MB), run=2006-2006msec 
00:16:33.737 WRITE: bw=36.0MiB/s (37.7MB/s), 36.0MiB/s-36.0MiB/s (37.7MB/s-37.7MB/s), io=72.2MiB (75.7MB), run=2006-2006msec 00:16:33.737 05:15:23 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:33.737 05:15:23 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:33.737 05:15:23 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:16:33.737 05:15:23 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:33.737 05:15:23 -- common/autotest_common.sh@1328 -- # local sanitizers 00:16:33.737 05:15:23 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:33.737 05:15:23 -- common/autotest_common.sh@1330 -- # shift 00:16:33.737 05:15:23 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:16:33.737 05:15:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:33.737 05:15:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:33.737 05:15:23 -- common/autotest_common.sh@1334 -- # grep libasan 00:16:33.737 05:15:23 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:33.737 05:15:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:33.737 05:15:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:33.737 05:15:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:33.737 05:15:23 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:16:33.737 05:15:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:33.737 05:15:23 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:33.737 05:15:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:33.737 05:15:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:33.737 05:15:23 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:33.737 05:15:23 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:33.737 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:16:33.737 fio-3.35 00:16:33.737 Starting 1 thread 00:16:36.278 00:16:36.278 test: (groupid=0, jobs=1): err= 0: pid=81757: Sun Dec 8 05:15:25 2024 00:16:36.278 read: IOPS=8210, BW=128MiB/s (135MB/s)(258MiB/2008msec) 00:16:36.278 slat (usec): min=3, max=118, avg= 3.96, stdev= 1.76 00:16:36.278 clat (usec): min=2061, max=23312, avg=8655.73, stdev=2768.33 00:16:36.278 lat (usec): min=2064, max=23316, avg=8659.69, stdev=2768.47 00:16:36.278 clat percentiles (usec): 00:16:36.278 | 1.00th=[ 4080], 5.00th=[ 4883], 10.00th=[ 5342], 20.00th=[ 6063], 00:16:36.278 | 30.00th=[ 6849], 40.00th=[ 7570], 50.00th=[ 8356], 60.00th=[ 9110], 00:16:36.278 | 70.00th=[10028], 80.00th=[10945], 90.00th=[12518], 95.00th=[13829], 00:16:36.278 | 99.00th=[16057], 99.50th=[16712], 99.90th=[17695], 99.95th=[17957], 00:16:36.278 | 99.99th=[18220] 00:16:36.278 bw ( KiB/s): min=58464, max=71232, per=50.45%, avg=66272.00, stdev=5955.27, samples=4 00:16:36.278 iops : min= 3654, max= 
4452, avg=4142.00, stdev=372.20, samples=4 00:16:36.278 write: IOPS=4745, BW=74.2MiB/s (77.8MB/s)(136MiB/1829msec); 0 zone resets 00:16:36.278 slat (usec): min=37, max=213, avg=40.25, stdev= 5.49 00:16:36.279 clat (usec): min=5051, max=20093, avg=12424.71, stdev=2277.48 00:16:36.279 lat (usec): min=5089, max=20131, avg=12464.96, stdev=2278.63 00:16:36.279 clat percentiles (usec): 00:16:36.279 | 1.00th=[ 8160], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10421], 00:16:36.279 | 30.00th=[10945], 40.00th=[11600], 50.00th=[12125], 60.00th=[12780], 00:16:36.279 | 70.00th=[13566], 80.00th=[14484], 90.00th=[15795], 95.00th=[16581], 00:16:36.279 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19006], 99.95th=[19530], 00:16:36.279 | 99.99th=[20055] 00:16:36.279 bw ( KiB/s): min=58784, max=76416, per=91.11%, avg=69184.00, stdev=7633.50, samples=4 00:16:36.279 iops : min= 3674, max= 4776, avg=4324.00, stdev=477.09, samples=4 00:16:36.279 lat (msec) : 4=0.53%, 10=49.59%, 20=49.86%, 50=0.01% 00:16:36.279 cpu : usr=80.97%, sys=14.15%, ctx=4, majf=0, minf=1 00:16:36.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:16:36.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:36.279 issued rwts: total=16486,8680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.279 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:36.279 00:16:36.279 Run status group 0 (all jobs): 00:16:36.279 READ: bw=128MiB/s (135MB/s), 128MiB/s-128MiB/s (135MB/s-135MB/s), io=258MiB (270MB), run=2008-2008msec 00:16:36.279 WRITE: bw=74.2MiB/s (77.8MB/s), 74.2MiB/s-74.2MiB/s (77.8MB/s-77.8MB/s), io=136MiB (142MB), run=1829-1829msec 00:16:36.279 05:15:25 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:36.279 05:15:25 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:16:36.279 05:15:25 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:16:36.279 05:15:25 -- host/fio.sh@51 -- # get_nvme_bdfs 00:16:36.279 05:15:25 -- common/autotest_common.sh@1508 -- # bdfs=() 00:16:36.279 05:15:25 -- common/autotest_common.sh@1508 -- # local bdfs 00:16:36.279 05:15:25 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:36.279 05:15:25 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:36.279 05:15:25 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:16:36.279 05:15:26 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:16:36.279 05:15:26 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:16:36.279 05:15:26 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:16:36.844 Nvme0n1 00:16:36.844 05:15:26 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:16:37.101 05:15:26 -- host/fio.sh@53 -- # ls_guid=7e8ec31d-c891-4f57-b409-34a054f703d6 00:16:37.101 05:15:26 -- host/fio.sh@54 -- # get_lvs_free_mb 7e8ec31d-c891-4f57-b409-34a054f703d6 00:16:37.101 05:15:26 -- common/autotest_common.sh@1353 -- # local lvs_uuid=7e8ec31d-c891-4f57-b409-34a054f703d6 00:16:37.101 05:15:26 -- common/autotest_common.sh@1354 -- # local lvs_info 00:16:37.101 05:15:26 -- common/autotest_common.sh@1355 -- # local fc 00:16:37.101 05:15:26 -- 
common/autotest_common.sh@1356 -- # local cs 00:16:37.101 05:15:26 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:37.380 05:15:27 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:16:37.380 { 00:16:37.380 "uuid": "7e8ec31d-c891-4f57-b409-34a054f703d6", 00:16:37.380 "name": "lvs_0", 00:16:37.380 "base_bdev": "Nvme0n1", 00:16:37.380 "total_data_clusters": 4, 00:16:37.380 "free_clusters": 4, 00:16:37.380 "block_size": 4096, 00:16:37.380 "cluster_size": 1073741824 00:16:37.380 } 00:16:37.380 ]' 00:16:37.380 05:15:27 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="7e8ec31d-c891-4f57-b409-34a054f703d6") .free_clusters' 00:16:37.380 05:15:27 -- common/autotest_common.sh@1358 -- # fc=4 00:16:37.380 05:15:27 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="7e8ec31d-c891-4f57-b409-34a054f703d6") .cluster_size' 00:16:37.638 4096 00:16:37.638 05:15:27 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:16:37.638 05:15:27 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:16:37.638 05:15:27 -- common/autotest_common.sh@1363 -- # echo 4096 00:16:37.638 05:15:27 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:16:37.897 d6533f9a-afe2-43c6-a43c-93859e2f1017 00:16:37.897 05:15:27 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:16:38.155 05:15:27 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:16:38.412 05:15:28 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:38.670 05:15:28 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:38.670 05:15:28 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:38.670 05:15:28 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:16:38.670 05:15:28 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:38.670 05:15:28 -- common/autotest_common.sh@1328 -- # local sanitizers 00:16:38.670 05:15:28 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:38.670 05:15:28 -- common/autotest_common.sh@1330 -- # shift 00:16:38.670 05:15:28 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:16:38.670 05:15:28 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:38.670 05:15:28 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:38.670 05:15:28 -- common/autotest_common.sh@1334 -- # grep libasan 00:16:38.670 05:15:28 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:38.670 05:15:28 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:38.670 05:15:28 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:38.670 05:15:28 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:38.670 05:15:28 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:38.670 05:15:28 -- 
common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:16:38.670 05:15:28 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:38.670 05:15:28 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:38.670 05:15:28 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:38.670 05:15:28 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:38.670 05:15:28 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:38.928 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:38.928 fio-3.35 00:16:38.928 Starting 1 thread 00:16:41.559 00:16:41.559 test: (groupid=0, jobs=1): err= 0: pid=81872: Sun Dec 8 05:15:30 2024 00:16:41.559 read: IOPS=6475, BW=25.3MiB/s (26.5MB/s)(50.8MiB/2009msec) 00:16:41.559 slat (usec): min=2, max=241, avg= 2.84, stdev= 2.94 00:16:41.559 clat (usec): min=2905, max=21226, avg=10307.73, stdev=919.41 00:16:41.559 lat (usec): min=2915, max=21229, avg=10310.57, stdev=919.21 00:16:41.559 clat percentiles (usec): 00:16:41.559 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:16:41.559 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:16:41.559 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11731], 00:16:41.559 | 99.00th=[12387], 99.50th=[13042], 99.90th=[17433], 99.95th=[19268], 00:16:41.559 | 99.99th=[20317] 00:16:41.559 bw ( KiB/s): min=24792, max=26880, per=99.92%, avg=25884.00, stdev=890.47, samples=4 00:16:41.559 iops : min= 6198, max= 6720, avg=6471.00, stdev=222.62, samples=4 00:16:41.559 write: IOPS=6484, BW=25.3MiB/s (26.6MB/s)(50.9MiB/2009msec); 0 zone resets 00:16:41.559 slat (usec): min=2, max=192, avg= 2.96, stdev= 2.07 00:16:41.559 clat (usec): min=1901, max=17694, avg=9362.88, stdev=866.67 00:16:41.559 lat (usec): min=1913, max=17699, avg=9365.84, stdev=866.61 00:16:41.559 clat percentiles (usec): 00:16:41.559 | 1.00th=[ 7635], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 8717], 00:16:41.559 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9503], 00:16:41.559 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10290], 95.00th=[10683], 00:16:41.559 | 99.00th=[11207], 99.50th=[11600], 99.90th=[16188], 99.95th=[17171], 00:16:41.559 | 99.99th=[17695] 00:16:41.559 bw ( KiB/s): min=25856, max=25992, per=100.00%, avg=25938.00, stdev=63.46, samples=4 00:16:41.559 iops : min= 6464, max= 6498, avg=6484.50, stdev=15.86, samples=4 00:16:41.559 lat (msec) : 2=0.01%, 4=0.05%, 10=58.20%, 20=41.73%, 50=0.02% 00:16:41.559 cpu : usr=71.07%, sys=22.16%, ctx=26, majf=0, minf=14 00:16:41.559 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:41.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:41.559 issued rwts: total=13010,13028,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.559 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:41.559 00:16:41.559 Run status group 0 (all jobs): 00:16:41.559 READ: bw=25.3MiB/s (26.5MB/s), 25.3MiB/s-25.3MiB/s (26.5MB/s-26.5MB/s), io=50.8MiB (53.3MB), run=2009-2009msec 00:16:41.559 WRITE: bw=25.3MiB/s (26.6MB/s), 25.3MiB/s-25.3MiB/s (26.6MB/s-26.6MB/s), io=50.9MiB (53.4MB), run=2009-2009msec 00:16:41.559 05:15:30 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:41.559 05:15:31 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:16:41.816 05:15:31 -- host/fio.sh@64 -- # ls_nested_guid=50e70a99-f97c-463b-ba82-fe551b455b62 00:16:41.816 05:15:31 -- host/fio.sh@65 -- # get_lvs_free_mb 50e70a99-f97c-463b-ba82-fe551b455b62 00:16:41.816 05:15:31 -- common/autotest_common.sh@1353 -- # local lvs_uuid=50e70a99-f97c-463b-ba82-fe551b455b62 00:16:41.816 05:15:31 -- common/autotest_common.sh@1354 -- # local lvs_info 00:16:41.816 05:15:31 -- common/autotest_common.sh@1355 -- # local fc 00:16:41.816 05:15:31 -- common/autotest_common.sh@1356 -- # local cs 00:16:41.816 05:15:31 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:42.075 05:15:31 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:16:42.075 { 00:16:42.075 "uuid": "7e8ec31d-c891-4f57-b409-34a054f703d6", 00:16:42.075 "name": "lvs_0", 00:16:42.075 "base_bdev": "Nvme0n1", 00:16:42.075 "total_data_clusters": 4, 00:16:42.075 "free_clusters": 0, 00:16:42.075 "block_size": 4096, 00:16:42.075 "cluster_size": 1073741824 00:16:42.075 }, 00:16:42.075 { 00:16:42.075 "uuid": "50e70a99-f97c-463b-ba82-fe551b455b62", 00:16:42.075 "name": "lvs_n_0", 00:16:42.075 "base_bdev": "d6533f9a-afe2-43c6-a43c-93859e2f1017", 00:16:42.075 "total_data_clusters": 1022, 00:16:42.075 "free_clusters": 1022, 00:16:42.075 "block_size": 4096, 00:16:42.075 "cluster_size": 4194304 00:16:42.075 } 00:16:42.075 ]' 00:16:42.075 05:15:31 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="50e70a99-f97c-463b-ba82-fe551b455b62") .free_clusters' 00:16:42.334 05:15:31 -- common/autotest_common.sh@1358 -- # fc=1022 00:16:42.334 05:15:31 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="50e70a99-f97c-463b-ba82-fe551b455b62") .cluster_size' 00:16:42.334 05:15:31 -- common/autotest_common.sh@1359 -- # cs=4194304 00:16:42.334 05:15:31 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:16:42.334 4088 00:16:42.334 05:15:31 -- common/autotest_common.sh@1363 -- # echo 4088 00:16:42.334 05:15:31 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:16:42.593 ef29d4ad-7e85-4fe7-b774-3d32e34f30ca 00:16:42.593 05:15:32 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:16:42.851 05:15:32 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:16:43.109 05:15:32 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:16:43.367 05:15:33 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:43.367 05:15:33 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:43.367 05:15:33 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:16:43.367 05:15:33 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:43.367 
05:15:33 -- common/autotest_common.sh@1328 -- # local sanitizers 00:16:43.367 05:15:33 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:43.368 05:15:33 -- common/autotest_common.sh@1330 -- # shift 00:16:43.368 05:15:33 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:16:43.368 05:15:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:43.368 05:15:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:43.368 05:15:33 -- common/autotest_common.sh@1334 -- # grep libasan 00:16:43.368 05:15:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:43.368 05:15:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:43.368 05:15:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:43.368 05:15:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:43.368 05:15:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:43.368 05:15:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:43.368 05:15:33 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:16:43.368 05:15:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:43.368 05:15:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:43.368 05:15:33 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:43.368 05:15:33 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:43.625 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:43.625 fio-3.35 00:16:43.625 Starting 1 thread 00:16:46.154 00:16:46.154 test: (groupid=0, jobs=1): err= 0: pid=81950: Sun Dec 8 05:15:35 2024 00:16:46.154 read: IOPS=5683, BW=22.2MiB/s (23.3MB/s)(44.6MiB/2010msec) 00:16:46.154 slat (usec): min=2, max=238, avg= 2.94, stdev= 2.91 00:16:46.154 clat (usec): min=3191, max=21371, avg=11783.43, stdev=1176.15 00:16:46.154 lat (usec): min=3196, max=21374, avg=11786.37, stdev=1176.02 00:16:46.154 clat percentiles (usec): 00:16:46.154 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:16:46.154 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:16:46.154 | 70.00th=[12256], 80.00th=[12518], 90.00th=[13042], 95.00th=[13435], 00:16:46.154 | 99.00th=[15664], 99.50th=[17171], 99.90th=[20317], 99.95th=[20579], 00:16:46.154 | 99.99th=[21103] 00:16:46.154 bw ( KiB/s): min=21320, max=23440, per=100.00%, avg=22736.00, stdev=988.96, samples=4 00:16:46.154 iops : min= 5330, max= 5860, avg=5684.00, stdev=247.24, samples=4 00:16:46.154 write: IOPS=5659, BW=22.1MiB/s (23.2MB/s)(44.4MiB/2010msec); 0 zone resets 00:16:46.154 slat (usec): min=2, max=159, avg= 3.06, stdev= 1.87 00:16:46.154 clat (usec): min=1929, max=20460, avg=10681.66, stdev=1093.00 00:16:46.154 lat (usec): min=1937, max=20464, avg=10684.71, stdev=1093.01 00:16:46.154 clat percentiles (usec): 00:16:46.154 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:16:46.154 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:16:46.154 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11863], 95.00th=[12256], 00:16:46.154 | 99.00th=[13829], 99.50th=[15664], 99.90th=[19268], 99.95th=[20055], 00:16:46.154 | 99.99th=[20317] 00:16:46.154 bw ( KiB/s): 
min=22088, max=23312, per=99.87%, avg=22610.00, stdev=546.71, samples=4 00:16:46.154 iops : min= 5522, max= 5828, avg=5652.50, stdev=136.68, samples=4 00:16:46.154 lat (msec) : 2=0.01%, 4=0.05%, 10=13.20%, 20=86.64%, 50=0.10% 00:16:46.154 cpu : usr=73.67%, sys=20.06%, ctx=3, majf=0, minf=14 00:16:46.154 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:16:46.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:46.154 issued rwts: total=11424,11376,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:46.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:46.154 00:16:46.154 Run status group 0 (all jobs): 00:16:46.154 READ: bw=22.2MiB/s (23.3MB/s), 22.2MiB/s-22.2MiB/s (23.3MB/s-23.3MB/s), io=44.6MiB (46.8MB), run=2010-2010msec 00:16:46.154 WRITE: bw=22.1MiB/s (23.2MB/s), 22.1MiB/s-22.1MiB/s (23.2MB/s-23.2MB/s), io=44.4MiB (46.6MB), run=2010-2010msec 00:16:46.154 05:15:35 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:46.154 05:15:35 -- host/fio.sh@74 -- # sync 00:16:46.154 05:15:35 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:16:46.720 05:15:36 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:16:47.051 05:15:36 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:16:47.325 05:15:36 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:16:47.583 05:15:37 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:16:48.149 05:15:37 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:48.149 05:15:37 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:48.149 05:15:37 -- host/fio.sh@86 -- # nvmftestfini 00:16:48.149 05:15:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:48.149 05:15:37 -- nvmf/common.sh@116 -- # sync 00:16:48.149 05:15:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:48.149 05:15:37 -- nvmf/common.sh@119 -- # set +e 00:16:48.149 05:15:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:48.149 05:15:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:48.149 rmmod nvme_tcp 00:16:48.149 rmmod nvme_fabrics 00:16:48.149 rmmod nvme_keyring 00:16:48.149 05:15:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:48.149 05:15:37 -- nvmf/common.sh@123 -- # set -e 00:16:48.149 05:15:37 -- nvmf/common.sh@124 -- # return 0 00:16:48.149 05:15:37 -- nvmf/common.sh@477 -- # '[' -n 81629 ']' 00:16:48.149 05:15:37 -- nvmf/common.sh@478 -- # killprocess 81629 00:16:48.149 05:15:37 -- common/autotest_common.sh@936 -- # '[' -z 81629 ']' 00:16:48.149 05:15:37 -- common/autotest_common.sh@940 -- # kill -0 81629 00:16:48.149 05:15:37 -- common/autotest_common.sh@941 -- # uname 00:16:48.149 05:15:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:48.149 05:15:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81629 00:16:48.149 05:15:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:48.149 killing process with pid 81629 00:16:48.149 05:15:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:48.149 05:15:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81629' 00:16:48.149 05:15:37 -- 
common/autotest_common.sh@955 -- # kill 81629 00:16:48.149 05:15:37 -- common/autotest_common.sh@960 -- # wait 81629 00:16:48.409 05:15:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:48.409 05:15:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:48.409 05:15:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:48.409 05:15:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:48.409 05:15:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:48.409 05:15:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.409 05:15:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.409 05:15:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.409 05:15:38 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:48.409 ************************************ 00:16:48.409 END TEST nvmf_fio_host 00:16:48.409 ************************************ 00:16:48.409 00:16:48.409 real 0m20.588s 00:16:48.409 user 1m31.293s 00:16:48.409 sys 0m4.378s 00:16:48.409 05:15:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:48.409 05:15:38 -- common/autotest_common.sh@10 -- # set +x 00:16:48.409 05:15:38 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:48.409 05:15:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:48.409 05:15:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:48.409 05:15:38 -- common/autotest_common.sh@10 -- # set +x 00:16:48.409 ************************************ 00:16:48.409 START TEST nvmf_failover 00:16:48.409 ************************************ 00:16:48.409 05:15:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:48.409 * Looking for test storage... 00:16:48.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:48.409 05:15:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:48.409 05:15:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:48.409 05:15:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:48.668 05:15:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:48.668 05:15:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:48.668 05:15:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:48.668 05:15:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:48.668 05:15:38 -- scripts/common.sh@335 -- # IFS=.-: 00:16:48.668 05:15:38 -- scripts/common.sh@335 -- # read -ra ver1 00:16:48.668 05:15:38 -- scripts/common.sh@336 -- # IFS=.-: 00:16:48.668 05:15:38 -- scripts/common.sh@336 -- # read -ra ver2 00:16:48.668 05:15:38 -- scripts/common.sh@337 -- # local 'op=<' 00:16:48.668 05:15:38 -- scripts/common.sh@339 -- # ver1_l=2 00:16:48.668 05:15:38 -- scripts/common.sh@340 -- # ver2_l=1 00:16:48.668 05:15:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:48.668 05:15:38 -- scripts/common.sh@343 -- # case "$op" in 00:16:48.668 05:15:38 -- scripts/common.sh@344 -- # : 1 00:16:48.668 05:15:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:48.668 05:15:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:48.668 05:15:38 -- scripts/common.sh@364 -- # decimal 1 00:16:48.668 05:15:38 -- scripts/common.sh@352 -- # local d=1 00:16:48.668 05:15:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:48.668 05:15:38 -- scripts/common.sh@354 -- # echo 1 00:16:48.668 05:15:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:48.668 05:15:38 -- scripts/common.sh@365 -- # decimal 2 00:16:48.668 05:15:38 -- scripts/common.sh@352 -- # local d=2 00:16:48.668 05:15:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:48.668 05:15:38 -- scripts/common.sh@354 -- # echo 2 00:16:48.668 05:15:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:48.668 05:15:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:48.668 05:15:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:48.668 05:15:38 -- scripts/common.sh@367 -- # return 0 00:16:48.668 05:15:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:48.668 05:15:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:48.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.668 --rc genhtml_branch_coverage=1 00:16:48.668 --rc genhtml_function_coverage=1 00:16:48.668 --rc genhtml_legend=1 00:16:48.668 --rc geninfo_all_blocks=1 00:16:48.668 --rc geninfo_unexecuted_blocks=1 00:16:48.668 00:16:48.668 ' 00:16:48.668 05:15:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:48.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.668 --rc genhtml_branch_coverage=1 00:16:48.668 --rc genhtml_function_coverage=1 00:16:48.668 --rc genhtml_legend=1 00:16:48.668 --rc geninfo_all_blocks=1 00:16:48.668 --rc geninfo_unexecuted_blocks=1 00:16:48.668 00:16:48.668 ' 00:16:48.668 05:15:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:48.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.668 --rc genhtml_branch_coverage=1 00:16:48.668 --rc genhtml_function_coverage=1 00:16:48.668 --rc genhtml_legend=1 00:16:48.668 --rc geninfo_all_blocks=1 00:16:48.668 --rc geninfo_unexecuted_blocks=1 00:16:48.668 00:16:48.668 ' 00:16:48.668 05:15:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:48.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.668 --rc genhtml_branch_coverage=1 00:16:48.668 --rc genhtml_function_coverage=1 00:16:48.668 --rc genhtml_legend=1 00:16:48.668 --rc geninfo_all_blocks=1 00:16:48.668 --rc geninfo_unexecuted_blocks=1 00:16:48.668 00:16:48.668 ' 00:16:48.668 05:15:38 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:48.668 05:15:38 -- nvmf/common.sh@7 -- # uname -s 00:16:48.668 05:15:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.668 05:15:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.668 05:15:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.668 05:15:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.668 05:15:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.668 05:15:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.668 05:15:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.669 05:15:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.669 05:15:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.669 05:15:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.669 05:15:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:16:48.669 
05:15:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:16:48.669 05:15:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.669 05:15:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.669 05:15:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:48.669 05:15:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:48.669 05:15:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.669 05:15:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.669 05:15:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.669 05:15:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.669 05:15:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.669 05:15:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.669 05:15:38 -- paths/export.sh@5 -- # export PATH 00:16:48.669 05:15:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.669 05:15:38 -- nvmf/common.sh@46 -- # : 0 00:16:48.669 05:15:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:48.669 05:15:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:48.669 05:15:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:48.669 05:15:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.669 05:15:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.669 05:15:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
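The nvmf/common.sh header traced above defines the host-side identity (NVME_HOSTNQN, NVME_HOSTID), NVME_CONNECT='nvme connect' and the NVME_HOST flag array alongside the 10.0.0.x/4420 addressing. For reference only, those variables are typically combined into a kernel-initiator connect call as sketched below; this particular run does not do that (the failover test drives I/O through bdevperf and bdev_nvme_attach_controller instead), so the snippet is purely an illustration of how the variables fit together:

  # hypothetical use of the variables exported above
  $NVME_CONNECT -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
      -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"
  # roughly equivalent to:
  #   nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
  #       --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID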
00:16:48.669 05:15:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:48.669 05:15:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:48.669 05:15:38 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:48.669 05:15:38 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:48.669 05:15:38 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:48.669 05:15:38 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:48.669 05:15:38 -- host/failover.sh@18 -- # nvmftestinit 00:16:48.669 05:15:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:48.669 05:15:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.669 05:15:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:48.669 05:15:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:48.669 05:15:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:48.669 05:15:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.669 05:15:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.669 05:15:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.669 05:15:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:48.669 05:15:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:48.669 05:15:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:48.669 05:15:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:48.669 05:15:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:48.669 05:15:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:48.669 05:15:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.669 05:15:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.669 05:15:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:48.669 05:15:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:48.669 05:15:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:48.669 05:15:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:48.669 05:15:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:48.669 05:15:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.669 05:15:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:48.669 05:15:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:48.669 05:15:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:48.669 05:15:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:48.669 05:15:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:48.669 05:15:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:48.669 Cannot find device "nvmf_tgt_br" 00:16:48.669 05:15:38 -- nvmf/common.sh@154 -- # true 00:16:48.669 05:15:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:48.669 Cannot find device "nvmf_tgt_br2" 00:16:48.669 05:15:38 -- nvmf/common.sh@155 -- # true 00:16:48.669 05:15:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:48.669 05:15:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:48.669 Cannot find device "nvmf_tgt_br" 00:16:48.669 05:15:38 -- nvmf/common.sh@157 -- # true 00:16:48.669 05:15:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:48.669 Cannot find device "nvmf_tgt_br2" 00:16:48.669 05:15:38 -- nvmf/common.sh@158 -- # true 00:16:48.669 05:15:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:48.669 05:15:38 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:16:48.669 05:15:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:48.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.669 05:15:38 -- nvmf/common.sh@161 -- # true 00:16:48.669 05:15:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:48.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.669 05:15:38 -- nvmf/common.sh@162 -- # true 00:16:48.669 05:15:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:48.669 05:15:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:48.669 05:15:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:48.669 05:15:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:48.669 05:15:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:48.669 05:15:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:48.669 05:15:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:48.669 05:15:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:48.669 05:15:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:48.669 05:15:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:48.669 05:15:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:48.669 05:15:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:48.669 05:15:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:48.669 05:15:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:48.927 05:15:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:48.927 05:15:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:48.927 05:15:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:48.927 05:15:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:48.927 05:15:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:48.927 05:15:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:48.927 05:15:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:48.927 05:15:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:48.927 05:15:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:48.927 05:15:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:48.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:16:48.927 00:16:48.927 --- 10.0.0.2 ping statistics --- 00:16:48.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.927 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:48.927 05:15:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:48.927 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:48.927 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:16:48.927 00:16:48.927 --- 10.0.0.3 ping statistics --- 00:16:48.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.927 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:48.927 05:15:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:48.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:48.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:16:48.927 00:16:48.927 --- 10.0.0.1 ping statistics --- 00:16:48.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.927 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:16:48.927 05:15:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.927 05:15:38 -- nvmf/common.sh@421 -- # return 0 00:16:48.927 05:15:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:48.927 05:15:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.927 05:15:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:48.927 05:15:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:48.927 05:15:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.927 05:15:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:48.927 05:15:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:48.927 05:15:38 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:48.927 05:15:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:48.927 05:15:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:48.927 05:15:38 -- common/autotest_common.sh@10 -- # set +x 00:16:48.927 05:15:38 -- nvmf/common.sh@469 -- # nvmfpid=82197 00:16:48.927 05:15:38 -- nvmf/common.sh@470 -- # waitforlisten 82197 00:16:48.927 05:15:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:48.927 05:15:38 -- common/autotest_common.sh@829 -- # '[' -z 82197 ']' 00:16:48.927 05:15:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.927 05:15:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.927 05:15:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.927 05:15:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.927 05:15:38 -- common/autotest_common.sh@10 -- # set +x 00:16:48.927 [2024-12-08 05:15:38.600062] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:48.927 [2024-12-08 05:15:38.600147] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.186 [2024-12-08 05:15:38.757754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:49.186 [2024-12-08 05:15:38.803032] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:49.186 [2024-12-08 05:15:38.803634] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.186 [2024-12-08 05:15:38.803879] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
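A note on the core mask: nvmfappstart passes -m 0xE to nvmf_tgt here, and 0xE = 0b1110, i.e. bits 1-3 set and bit 0 clear. That is why the EAL banner reports "Total cores available: 3" and the reactors in the following lines come up on cores 1, 2 and 3 while core 0 stays free for the test harness. A quick way to check a mask (illustrative only):

  printf '0x%X = %d\n' 0xE 0xE     # -> 0xE = 14 (binary 1110 -> cores 1,2,3)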
00:16:49.186 [2024-12-08 05:15:38.804154] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.186 [2024-12-08 05:15:38.804519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.186 [2024-12-08 05:15:38.804573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:49.186 [2024-12-08 05:15:38.804577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.120 05:15:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:50.120 05:15:39 -- common/autotest_common.sh@862 -- # return 0 00:16:50.120 05:15:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:50.120 05:15:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:50.120 05:15:39 -- common/autotest_common.sh@10 -- # set +x 00:16:50.120 05:15:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.120 05:15:39 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:50.378 [2024-12-08 05:15:39.918507] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:50.378 05:15:39 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:50.636 Malloc0 00:16:50.636 05:15:40 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:50.893 05:15:40 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:51.151 05:15:40 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:51.409 [2024-12-08 05:15:41.136756] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.409 05:15:41 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:51.975 [2024-12-08 05:15:41.461153] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:51.975 05:15:41 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:52.234 [2024-12-08 05:15:41.769443] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:52.234 05:15:41 -- host/failover.sh@31 -- # bdevperf_pid=82266 00:16:52.234 05:15:41 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:16:52.234 05:15:41 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:52.234 05:15:41 -- host/failover.sh@34 -- # waitforlisten 82266 /var/tmp/bdevperf.sock 00:16:52.234 05:15:41 -- common/autotest_common.sh@829 -- # '[' -z 82266 ']' 00:16:52.234 05:15:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:52.234 05:15:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:52.234 05:15:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:52.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:52.234 05:15:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:52.234 05:15:41 -- common/autotest_common.sh@10 -- # set +x 00:16:53.606 05:15:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:53.606 05:15:42 -- common/autotest_common.sh@862 -- # return 0 00:16:53.606 05:15:42 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:53.606 NVMe0n1 00:16:53.606 05:15:43 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:53.864 00:16:53.864 05:15:43 -- host/failover.sh@39 -- # run_test_pid=82288 00:16:53.864 05:15:43 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:53.864 05:15:43 -- host/failover.sh@41 -- # sleep 1 00:16:55.238 05:15:44 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.238 [2024-12-08 05:15:44.906495] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64e240 is same with the state(5) to be set 00:16:55.238 [2024-12-08 05:15:44.906555] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64e240 is same with the state(5) to be set 00:16:55.238 [2024-12-08 05:15:44.906567] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64e240 is same with the state(5) to be set 00:16:55.238 [2024-12-08 05:15:44.906576] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64e240 is same with the state(5) to be set 00:16:55.238 [2024-12-08 05:15:44.906585] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64e240 is same with the state(5) to be set 00:16:55.238 [2024-12-08 05:15:44.906593] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64e240 is same with the state(5) to be set 00:16:55.238 [2024-12-08 05:15:44.906601] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64e240 is same with the state(5) to be set 00:16:55.239 [2024-12-08 05:15:44.906609] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64e240 is same with the state(5) to be set 00:16:55.239 [2024-12-08 05:15:44.906617] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64e240 is same with the state(5) to be set 00:16:55.239 [2024-12-08 05:15:44.906625] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64e240 is same with the state(5) to be set 00:16:55.239 [2024-12-08 05:15:44.906634] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64e240 is same with the state(5) to be set 00:16:55.239 [2024-12-08 05:15:44.906642] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64e240 is same with the state(5) to be set 00:16:55.239 [2024-12-08 05:15:44.906650] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64e240 is same with the state(5) to be set 00:16:55.239 [2024-12-08 05:15:44.906657] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x64e240 is same with the state(5) to be set 00:16:55.239 [2024-12-08 05:15:44.906666] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64e240 is same with the state(5) to be set 00:16:55.239 [2024-12-08 05:15:44.906688] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64e240 is same with the state(5) to be set 00:16:55.239 [2024-12-08 05:15:44.906697] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64e240 is same with the state(5) to be set 00:16:55.239 05:15:44 -- host/failover.sh@45 -- # sleep 3 00:16:58.522 05:15:47 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:58.779 00:16:58.779 05:15:48 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:59.037 [2024-12-08 05:15:48.565381] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565434] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565446] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565456] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565464] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565473] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565481] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565489] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565498] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565509] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565523] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565536] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565550] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565564] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565577] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565589] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565597] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565606] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565615] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565623] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565631] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565640] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565648] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565657] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565665] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565690] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565701] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565709] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565723] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565731] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565740] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565749] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565757] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565765] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 [2024-12-08 05:15:48.565774] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ee50 is same with the state(5) to be set 00:16:59.037 05:15:48 -- host/failover.sh@50 -- # sleep 3 00:17:02.317 05:15:51 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:02.317 [2024-12-08 05:15:51.927176] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.317 05:15:51 
-- host/failover.sh@55 -- # sleep 1 00:17:03.252 05:15:52 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:03.510 [2024-12-08 05:15:53.262992] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2550 is same with the state(5) to be set 00:17:03.510 [2024-12-08 05:15:53.263055] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2550 is same with the state(5) to be set 00:17:03.510 [2024-12-08 05:15:53.263067] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2550 is same with the state(5) to be set 00:17:03.510 [2024-12-08 05:15:53.263076] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2550 is same with the state(5) to be set 00:17:03.510 [2024-12-08 05:15:53.263084] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2550 is same with the state(5) to be set 00:17:03.510 [2024-12-08 05:15:53.263092] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2550 is same with the state(5) to be set 00:17:03.510 [2024-12-08 05:15:53.263101] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2550 is same with the state(5) to be set 00:17:03.510 [2024-12-08 05:15:53.263109] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2550 is same with the state(5) to be set 00:17:03.510 [2024-12-08 05:15:53.263117] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2550 is same with the state(5) to be set 00:17:03.510 [2024-12-08 05:15:53.263126] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2550 is same with the state(5) to be set 00:17:03.510 05:15:53 -- host/failover.sh@59 -- # wait 82288 00:17:10.159 0 00:17:10.159 05:15:58 -- host/failover.sh@61 -- # killprocess 82266 00:17:10.159 05:15:58 -- common/autotest_common.sh@936 -- # '[' -z 82266 ']' 00:17:10.159 05:15:58 -- common/autotest_common.sh@940 -- # kill -0 82266 00:17:10.159 05:15:58 -- common/autotest_common.sh@941 -- # uname 00:17:10.159 05:15:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:10.159 05:15:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82266 00:17:10.159 killing process with pid 82266 00:17:10.159 05:15:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:10.159 05:15:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:10.159 05:15:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82266' 00:17:10.159 05:15:58 -- common/autotest_common.sh@955 -- # kill 82266 00:17:10.159 05:15:58 -- common/autotest_common.sh@960 -- # wait 82266 00:17:10.159 05:15:58 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:10.159 [2024-12-08 05:15:41.843329] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
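Everything from here on is the dump of try.txt, the bdevperf-side log that the script cats once the run finishes. Before reading it, it helps to restate what the preceding rpc.py calls built on the target: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev on three TCP listeners that the test fails over between. A condensed sketch of those setup calls, assuming the rpc.py path and default RPC socket used in this run:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # Transport plus backing bdev (64 MB malloc bdev, 512-byte blocks).
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0

  # One subsystem, one namespace, three listeners to rotate through.
  $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns $NQN Malloc0
  for port in 4420 4421 4422; do
      $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s $port
  done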
00:17:10.159 [2024-12-08 05:15:41.843474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82266 ] 00:17:10.159 [2024-12-08 05:15:41.984392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.159 [2024-12-08 05:15:42.019779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.159 Running I/O for 15 seconds... 00:17:10.159 [2024-12-08 05:15:44.906769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.159 [2024-12-08 05:15:44.906827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.159 [2024-12-08 05:15:44.906855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.159 [2024-12-08 05:15:44.906872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.159 [2024-12-08 05:15:44.906889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.906904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.906921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.906935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.906951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.906965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.906981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.906995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 
[2024-12-08 05:15:44.907087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.160 [2024-12-08 05:15:44.907409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.160 [2024-12-08 05:15:44.907444] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.160 [2024-12-08 05:15:44.907834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.160 [2024-12-08 05:15:44.907875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.160 [2024-12-08 05:15:44.907907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:113096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.907968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.907985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.160 [2024-12-08 05:15:44.908000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.908024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.160 [2024-12-08 05:15:44.908043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.908072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.160 [2024-12-08 05:15:44.908097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.908115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.160 [2024-12-08 05:15:44.908130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.908146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.160 [2024-12-08 05:15:44.908160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.908176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.908190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.160 [2024-12-08 05:15:44.908206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.160 [2024-12-08 05:15:44.908220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:113168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.908250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.908280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.908310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.161 [2024-12-08 05:15:44.908340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.161 [2024-12-08 05:15:44.908371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.161 [2024-12-08 05:15:44.908401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:113216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.908441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.161 [2024-12-08 05:15:44.908474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.908505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.908535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.161 [2024-12-08 05:15:44.908565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.161 [2024-12-08 05:15:44.908595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.908626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.908656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.908704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.908735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.908766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:17:10.161 [2024-12-08 05:15:44.908782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.908796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.908827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.908865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:112688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.908897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:113272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.161 [2024-12-08 05:15:44.908928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.908960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.908977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.161 [2024-12-08 05:15:44.908992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.909008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.909022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.909039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.909053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.909069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.161 [2024-12-08 05:15:44.909083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 
05:15:44.909099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.909114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.909130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.909145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.909161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.161 [2024-12-08 05:15:44.909175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.909191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.161 [2024-12-08 05:15:44.909205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.909221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.909236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.909258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.161 [2024-12-08 05:15:44.909273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.909290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.161 [2024-12-08 05:15:44.909304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.909320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.909334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.909350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.161 [2024-12-08 05:15:44.909365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.909381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.909395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.909411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.909424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.909441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.161 [2024-12-08 05:15:44.909457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.161 [2024-12-08 05:15:44.909474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.909488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.909504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:112744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.909518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.909534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.909548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.909564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.909578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.909594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.909610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.909626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.162 [2024-12-08 05:15:44.909647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.909665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.909692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.909710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:113408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.909724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.909741] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:70 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.909755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.909771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:112800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.909785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.909802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.909816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.909832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.909845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.909862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.909876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.909892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.909906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.909922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:112856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.909936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.909953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.909983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.162 [2024-12-08 05:15:44.910018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.162 [2024-12-08 05:15:44.910048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 
nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.162 [2024-12-08 05:15:44.910087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.910117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:113448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.910148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.910178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.910208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.910239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.910270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.910301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.910331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.910362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:113512 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:10.162 [2024-12-08 05:15:44.910392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.162 [2024-12-08 05:15:44.910423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.910460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.162 [2024-12-08 05:15:44.910493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.910524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.162 [2024-12-08 05:15:44.910554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.910585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.910615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.162 [2024-12-08 05:15:44.910645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.162 [2024-12-08 05:15:44.910687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:10.162 [2024-12-08 05:15:44.910719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.162 [2024-12-08 05:15:44.910736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:113600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.162 [2024-12-08 05:15:44.910750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:44.910768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:44.910783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:44.910800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:44.910815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:44.910831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:44.910845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:44.910868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:112960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:44.910884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:44.910900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:44.910914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:44.910930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:44.910944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:44.910960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6867e0 is same with the state(5) to be set 00:17:10.163 [2024-12-08 05:15:44.910978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:10.163 [2024-12-08 05:15:44.910989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:10.163 [2024-12-08 05:15:44.911002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113016 len:8 PRP1 0x0 PRP2 0x0 00:17:10.163 [2024-12-08 05:15:44.911016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:44.911065] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6867e0 was disconnected and freed. reset controller. 
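The long run of ABORTED - SQ DELETION completions above is the initiator draining I/O that was still queued on the connection to 10.0.0.2:4420 when that listener was removed: each outstanding read/write on the deleted submission queue is completed with an abort status, the qpair is disconnected and freed, and bdev_nvme then resets the controller onto the next registered path, as the failover notice below shows. One way to watch the path set from the initiator side is to query bdevperf's RPC socket; a hedged sketch (the -n filter is an assumption about rpc.py, not something this log shows):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Because the test attached 10.0.0.2:4420 and 10.0.0.2:4421 under the same
  # -b NVMe0 name, both trids should appear as paths for that controller.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0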
00:17:10.163 [2024-12-08 05:15:44.911083] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:10.163 [2024-12-08 05:15:44.911140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.163 [2024-12-08 05:15:44.911163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:44.911179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.163 [2024-12-08 05:15:44.911193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:44.911208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.163 [2024-12-08 05:15:44.911222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:44.911236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.163 [2024-12-08 05:15:44.911250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:44.911264] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:10.163 [2024-12-08 05:15:44.913962] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:10.163 [2024-12-08 05:15:44.914038] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x689820 (9): Bad file descriptor 00:17:10.163 [2024-12-08 05:15:44.946273] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:10.163 [2024-12-08 05:15:48.565858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.565907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.565957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:90904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.565976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.565998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.566027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.566047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.566063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.566079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:90928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.566094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.566110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.566125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.566141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:90240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.566155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.566171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.566185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.566201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:90256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.566215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.566231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:90264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.566245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.566261] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.566275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.566291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.566305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.566322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:90296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.566337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.566353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:90304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.566367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.566394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:90968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.566409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.566425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:90976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.566439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.566455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.566471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.566488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:90992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.566502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.566518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.566533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.566549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.566563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.566579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.566593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.566609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:91032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.566623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.566639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:91040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.566653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.566670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:91048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.163 [2024-12-08 05:15:48.566702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.163 [2024-12-08 05:15:48.566719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:90352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.163 [2024-12-08 05:15:48.566734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.566750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:90392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.566764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.566780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.566804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.566821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.566835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.566851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.566865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.566881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.566895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.566911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:108 nsid:1 lba:90496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.566925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.566941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.566955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.566971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.566987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:91064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.164 [2024-12-08 05:15:48.567035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.164 [2024-12-08 05:15:48.567066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:91080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.567097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:91088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.164 [2024-12-08 05:15:48.567127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:91096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.567157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.567187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.164 [2024-12-08 05:15:48.567227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:91120 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.567257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.164 [2024-12-08 05:15:48.567287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:91136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.567317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.567348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.164 [2024-12-08 05:15:48.567379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:91160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.164 [2024-12-08 05:15:48.567425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.164 [2024-12-08 05:15:48.567456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:91176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.567486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.164 [2024-12-08 05:15:48.567523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:91192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.567553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:91200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:10.164 [2024-12-08 05:15:48.567584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:91208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.164 [2024-12-08 05:15:48.567628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:91216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.567660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:90520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.567706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.567737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.567767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.567797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.567827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.164 [2024-12-08 05:15:48.567843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.164 [2024-12-08 05:15:48.567857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.567873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.567887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.567904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.567918] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.567934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:91224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.567948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.567965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.567979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.568026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.568068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.165 [2024-12-08 05:15:48.568099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.165 [2024-12-08 05:15:48.568129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.568160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.165 [2024-12-08 05:15:48.568190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.165 [2024-12-08 05:15:48.568219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.165 [2024-12-08 05:15:48.568249] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.568279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.568309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.165 [2024-12-08 05:15:48.568340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.568369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.568399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.568430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.568469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:90656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.568499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.568529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:90680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.568560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.568591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:90720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.568621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.568651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.568696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.568728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.568758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.165 [2024-12-08 05:15:48.568788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.568818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.568855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:91392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.568886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:10.165 [2024-12-08 05:15:48.568902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.165 [2024-12-08 05:15:48.568916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.165 [2024-12-08 05:15:48.568950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.568968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.165 [2024-12-08 05:15:48.568983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.569008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.165 [2024-12-08 05:15:48.569029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.569046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.165 [2024-12-08 05:15:48.569061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.165 [2024-12-08 05:15:48.569078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.166 [2024-12-08 05:15:48.569092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.166 [2024-12-08 05:15:48.569122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.166 [2024-12-08 05:15:48.569153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.166 [2024-12-08 05:15:48.569183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.166 [2024-12-08 05:15:48.569212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569228] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.166 [2024-12-08 05:15:48.569247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.166 [2024-12-08 05:15:48.569288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.166 [2024-12-08 05:15:48.569318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:90864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.166 [2024-12-08 05:15:48.569349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:90872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.166 [2024-12-08 05:15:48.569380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.166 [2024-12-08 05:15:48.569410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:91464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.166 [2024-12-08 05:15:48.569440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.166 [2024-12-08 05:15:48.569472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.166 [2024-12-08 05:15:48.569503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.166 [2024-12-08 05:15:48.569533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569549] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.166 [2024-12-08 05:15:48.569563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:91504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.166 [2024-12-08 05:15:48.569594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.166 [2024-12-08 05:15:48.569623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.166 [2024-12-08 05:15:48.569660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:91528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.166 [2024-12-08 05:15:48.569707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.166 [2024-12-08 05:15:48.569737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:91544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.166 [2024-12-08 05:15:48.569770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:91552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.166 [2024-12-08 05:15:48.569800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:91560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.166 [2024-12-08 05:15:48.569831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.166 [2024-12-08 05:15:48.569861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:106 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.166 [2024-12-08 05:15:48.569891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.166 [2024-12-08 05:15:48.569922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:90888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.166 [2024-12-08 05:15:48.569952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.569968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.166 [2024-12-08 05:15:48.569984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.570012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:90952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.166 [2024-12-08 05:15:48.570033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.570049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:90960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.166 [2024-12-08 05:15:48.570064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.570080] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x687440 is same with the state(5) to be set 00:17:10.166 [2024-12-08 05:15:48.570106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:10.166 [2024-12-08 05:15:48.570118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:10.166 [2024-12-08 05:15:48.570130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91000 len:8 PRP1 0x0 PRP2 0x0 00:17:10.166 [2024-12-08 05:15:48.570143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.570192] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x687440 was disconnected and freed. reset controller. 
00:17:10.166 [2024-12-08 05:15:48.570211] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:17:10.166 [2024-12-08 05:15:48.570269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.166 [2024-12-08 05:15:48.570292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.570308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.166 [2024-12-08 05:15:48.570322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.166 [2024-12-08 05:15:48.570337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.166 [2024-12-08 05:15:48.570351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:48.570368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.167 [2024-12-08 05:15:48.570383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:48.570397] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:10.167 [2024-12-08 05:15:48.570446] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x689820 (9): Bad file descriptor 00:17:10.167 [2024-12-08 05:15:48.572861] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:10.167 [2024-12-08 05:15:48.600885] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
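Run over the excerpt shown so far, a summary like the sketch above would report two failover transitions (10.0.0.2:4420 -> 10.0.0.2:4421 and 10.0.0.2:4421 -> 10.0.0.2:4422), two successful controller resets, and the per-opcode tallies of the READ/WRITE commands aborted by SQ deletion; the output that continues below is the start of another such burst of aborted I/O.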
00:17:10.167 [2024-12-08 05:15:53.263214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.263296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.263344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.263377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.263430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:40880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.263462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.263495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:40192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.263526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.263557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.263617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.263647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:40224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.263691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.263723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:40232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.263749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.263778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.263807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.263839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.263869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.263900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.263928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.263956] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:40272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.263981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.264008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:40912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.167 [2024-12-08 05:15:53.264032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.264060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.264087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.264113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.167 [2024-12-08 05:15:53.264136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.264162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.264187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.264213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.167 [2024-12-08 05:15:53.264237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.264262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:40952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.167 [2024-12-08 05:15:53.264289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.264340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:40960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.167 [2024-12-08 05:15:53.264368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.264398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.264426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.264458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.264485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.264511] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:68 nsid:1 lba:40280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.264530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.264546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:40288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.264562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.264588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:40312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.264616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.264645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:40320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.264668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.264717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.264742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.264768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:40344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.264792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.264818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.264842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.264873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:40400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.264896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.264923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:40984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.264945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.264971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.265035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.265070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:41000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.167 [2024-12-08 05:15:53.265100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.265129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.265154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.265183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.265213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.265243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.167 [2024-12-08 05:15:53.265272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.265303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.265329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.265360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.167 [2024-12-08 05:15:53.265389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.265419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.167 [2024-12-08 05:15:53.265447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.167 [2024-12-08 05:15:53.265480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.168 [2024-12-08 05:15:53.265509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.265539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.168 [2024-12-08 05:15:53.265568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.265599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.168 [2024-12-08 05:15:53.265629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.265659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:10.168 [2024-12-08 05:15:53.265717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.265751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.168 [2024-12-08 05:15:53.265780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.265832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.168 [2024-12-08 05:15:53.265863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.265895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.168 [2024-12-08 05:15:53.265923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.265954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.168 [2024-12-08 05:15:53.265983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.266014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.168 [2024-12-08 05:15:53.266054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.266087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:40432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.168 [2024-12-08 05:15:53.266117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.266149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:40456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.168 [2024-12-08 05:15:53.266178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.266209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:40472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.168 [2024-12-08 05:15:53.266238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.266269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:40480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.168 [2024-12-08 05:15:53.266297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.266328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:40488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.168 [2024-12-08 
05:15:53.266356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.266385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.168 [2024-12-08 05:15:53.266414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.266445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:40568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.168 [2024-12-08 05:15:53.266473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.266505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.168 [2024-12-08 05:15:53.266534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.266566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.168 [2024-12-08 05:15:53.266612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.266647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.168 [2024-12-08 05:15:53.266695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.266733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.168 [2024-12-08 05:15:53.266763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.266795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.168 [2024-12-08 05:15:53.266824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.266855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:41160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.168 [2024-12-08 05:15:53.266883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.266914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:41168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.168 [2024-12-08 05:15:53.266942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.266973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.168 [2024-12-08 05:15:53.267001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.267032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.168 [2024-12-08 05:15:53.267061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.267093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.168 [2024-12-08 05:15:53.267122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.267152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.168 [2024-12-08 05:15:53.267182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.267212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.168 [2024-12-08 05:15:53.267241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.267272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.168 [2024-12-08 05:15:53.267298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.267329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.168 [2024-12-08 05:15:53.267357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.267388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.168 [2024-12-08 05:15:53.267447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.267478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.168 [2024-12-08 05:15:53.267506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.267537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.168 [2024-12-08 05:15:53.267563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.267590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.168 [2024-12-08 05:15:53.267616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.168 [2024-12-08 05:15:53.267642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.169 [2024-12-08 05:15:53.267667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.267718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.169 [2024-12-08 05:15:53.267743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.267771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.267806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.267833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:40600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.267859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.267885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.267909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.267936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.267958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.267985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:40648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.268008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.268035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:40656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.268063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.268093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.268121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.268171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:40680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.268201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.268232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.268258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.268289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.268317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.268348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.268375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.268406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.169 [2024-12-08 05:15:53.268434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.268464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.268493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.268522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.169 [2024-12-08 05:15:53.268547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.268574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.169 [2024-12-08 05:15:53.268598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.268627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.169 [2024-12-08 05:15:53.268653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.268698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.169 [2024-12-08 05:15:53.268728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.268756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.169 [2024-12-08 05:15:53.268780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 
[2024-12-08 05:15:53.268805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.169 [2024-12-08 05:15:53.268828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.268854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.268896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.268925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.169 [2024-12-08 05:15:53.268949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.268973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.269000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.269029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.269056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.269086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.269115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.269146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.269172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.269203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.269228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.269254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.269276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.269301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.269324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.269349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.269371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.269397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.269422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.269448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.269472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.269498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.269520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.269564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.269591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.269617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.269640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.269665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.169 [2024-12-08 05:15:53.269710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.269739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.169 [2024-12-08 05:15:53.269764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.269791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.169 [2024-12-08 05:15:53.269814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.269840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.169 [2024-12-08 05:15:53.269864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.169 [2024-12-08 05:15:53.269892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:67 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.170 [2024-12-08 05:15:53.269915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.170 [2024-12-08 05:15:53.269941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.170 [2024-12-08 05:15:53.269965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.170 [2024-12-08 05:15:53.269991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.170 [2024-12-08 05:15:53.270015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.170 [2024-12-08 05:15:53.270042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.170 [2024-12-08 05:15:53.270069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.170 [2024-12-08 05:15:53.270095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.170 [2024-12-08 05:15:53.270119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.170 [2024-12-08 05:15:53.270145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.170 [2024-12-08 05:15:53.270169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.170 [2024-12-08 05:15:53.270195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.170 [2024-12-08 05:15:53.270218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.170 [2024-12-08 05:15:53.270260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.170 [2024-12-08 05:15:53.270287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.170 [2024-12-08 05:15:53.270316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.170 [2024-12-08 05:15:53.270343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.170 [2024-12-08 05:15:53.270371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.170 [2024-12-08 05:15:53.270397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.170 [2024-12-08 05:15:53.270425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40832 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.170 [2024-12-08 05:15:53.270456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.170 [2024-12-08 05:15:53.270486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.170 [2024-12-08 05:15:53.270512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.170 [2024-12-08 05:15:53.270542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.170 [2024-12-08 05:15:53.270569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.170 [2024-12-08 05:15:53.270600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.170 [2024-12-08 05:15:53.270630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.170 [2024-12-08 05:15:53.270662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.170 [2024-12-08 05:15:53.270711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.170 [2024-12-08 05:15:53.270743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.170 [2024-12-08 05:15:53.270769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.170 [2024-12-08 05:15:53.270797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6adb00 is same with the state(5) to be set 00:17:10.170 [2024-12-08 05:15:53.270838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:10.170 [2024-12-08 05:15:53.270857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:10.170 [2024-12-08 05:15:53.270879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40904 len:8 PRP1 0x0 PRP2 0x0 00:17:10.170 [2024-12-08 05:15:53.270907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.170 [2024-12-08 05:15:53.270981] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6adb00 was disconnected and freed. reset controller. 
00:17:10.170 [2024-12-08 05:15:53.271020] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:17:10.170 [2024-12-08 05:15:53.271122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.170 [2024-12-08 05:15:53.271180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.170 [2024-12-08 05:15:53.271211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.170 [2024-12-08 05:15:53.271234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.170 [2024-12-08 05:15:53.271263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.170 [2024-12-08 05:15:53.271288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.170 [2024-12-08 05:15:53.271313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.170 [2024-12-08 05:15:53.271336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.170 [2024-12-08 05:15:53.271360] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:10.170 [2024-12-08 05:15:53.271457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x689820 (9): Bad file descriptor 00:17:10.170 [2024-12-08 05:15:53.274297] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:10.170 [2024-12-08 05:15:53.309935] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:10.170 00:17:10.170 Latency(us) 00:17:10.170 [2024-12-08T05:15:59.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.170 [2024-12-08T05:15:59.956Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:10.170 Verification LBA range: start 0x0 length 0x4000 00:17:10.170 NVMe0n1 : 15.01 12248.55 47.85 276.45 0.00 10200.11 426.36 26333.56 00:17:10.170 [2024-12-08T05:15:59.956Z] =================================================================================================================== 00:17:10.170 [2024-12-08T05:15:59.956Z] Total : 12248.55 47.85 276.45 0.00 10200.11 426.36 26333.56 00:17:10.170 Received shutdown signal, test time was about 15.000000 seconds 00:17:10.170 00:17:10.170 Latency(us) 00:17:10.170 [2024-12-08T05:15:59.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.170 [2024-12-08T05:15:59.956Z] =================================================================================================================== 00:17:10.170 [2024-12-08T05:15:59.956Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:10.170 05:15:58 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:17:10.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:10.170 05:15:58 -- host/failover.sh@65 -- # count=3 00:17:10.170 05:15:58 -- host/failover.sh@67 -- # (( count != 3 )) 00:17:10.170 05:15:58 -- host/failover.sh@73 -- # bdevperf_pid=82468 00:17:10.170 05:15:58 -- host/failover.sh@75 -- # waitforlisten 82468 /var/tmp/bdevperf.sock 00:17:10.170 05:15:58 -- common/autotest_common.sh@829 -- # '[' -z 82468 ']' 00:17:10.170 05:15:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:10.170 05:15:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.170 05:15:58 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:17:10.170 05:15:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:10.170 05:15:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.170 05:15:58 -- common/autotest_common.sh@10 -- # set +x 00:17:10.170 05:15:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:10.170 05:15:59 -- common/autotest_common.sh@862 -- # return 0 00:17:10.170 05:15:59 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:10.170 [2024-12-08 05:15:59.587310] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:10.170 05:15:59 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:10.170 [2024-12-08 05:15:59.915665] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:10.428 05:15:59 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:10.686 NVMe0n1 00:17:10.686 05:16:00 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:10.944 00:17:10.944 05:16:00 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:11.202 00:17:11.202 05:16:00 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:11.202 05:16:00 -- host/failover.sh@82 -- # grep -q NVMe0 00:17:11.460 05:16:01 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:11.718 05:16:01 -- host/failover.sh@87 -- # sleep 3 00:17:15.000 05:16:04 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:15.000 05:16:04 -- host/failover.sh@88 -- # grep -q NVMe0 00:17:15.000 05:16:04 -- host/failover.sh@90 -- # run_test_pid=82537 00:17:15.000 05:16:04 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:15.000 05:16:04 -- host/failover.sh@92 -- # wait 82537 00:17:16.371 0 00:17:16.371 05:16:05 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:16.371 [2024-12-08 05:15:59.044109] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:16.371 [2024-12-08 05:15:59.044211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82468 ] 00:17:16.371 [2024-12-08 05:15:59.176589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.371 [2024-12-08 05:15:59.218088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.371 [2024-12-08 05:16:01.472032] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:16.371 [2024-12-08 05:16:01.472206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:16.371 [2024-12-08 05:16:01.472245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:16.371 [2024-12-08 05:16:01.472275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:16.371 [2024-12-08 05:16:01.472298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:16.371 [2024-12-08 05:16:01.472321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:16.371 [2024-12-08 05:16:01.472344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:16.371 [2024-12-08 05:16:01.472368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:16.371 [2024-12-08 05:16:01.472391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:16.371 [2024-12-08 05:16:01.472415] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:16.371 [2024-12-08 05:16:01.472496] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:16.371 [2024-12-08 05:16:01.472545] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1045820 (9): Bad file descriptor 00:17:16.371 [2024-12-08 05:16:01.475867] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:16.371 Running I/O for 1 seconds... 
00:17:16.371 00:17:16.371 Latency(us) 00:17:16.371 [2024-12-08T05:16:06.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.371 [2024-12-08T05:16:06.157Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:16.371 Verification LBA range: start 0x0 length 0x4000 00:17:16.371 NVMe0n1 : 1.01 12317.31 48.11 0.00 0.00 10336.19 878.78 13405.09 00:17:16.371 [2024-12-08T05:16:06.157Z] =================================================================================================================== 00:17:16.371 [2024-12-08T05:16:06.157Z] Total : 12317.31 48.11 0.00 0.00 10336.19 878.78 13405.09 00:17:16.371 05:16:05 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:16.371 05:16:05 -- host/failover.sh@95 -- # grep -q NVMe0 00:17:16.628 05:16:06 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:16.884 05:16:06 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:16.884 05:16:06 -- host/failover.sh@99 -- # grep -q NVMe0 00:17:17.141 05:16:06 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:17.398 05:16:07 -- host/failover.sh@101 -- # sleep 3 00:17:20.681 05:16:10 -- host/failover.sh@103 -- # grep -q NVMe0 00:17:20.681 05:16:10 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:20.681 05:16:10 -- host/failover.sh@108 -- # killprocess 82468 00:17:20.681 05:16:10 -- common/autotest_common.sh@936 -- # '[' -z 82468 ']' 00:17:20.681 05:16:10 -- common/autotest_common.sh@940 -- # kill -0 82468 00:17:20.681 05:16:10 -- common/autotest_common.sh@941 -- # uname 00:17:20.681 05:16:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:20.681 05:16:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82468 00:17:20.940 killing process with pid 82468 00:17:20.940 05:16:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:20.940 05:16:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:20.940 05:16:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82468' 00:17:20.940 05:16:10 -- common/autotest_common.sh@955 -- # kill 82468 00:17:20.940 05:16:10 -- common/autotest_common.sh@960 -- # wait 82468 00:17:20.940 05:16:10 -- host/failover.sh@110 -- # sync 00:17:20.940 05:16:10 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:21.198 05:16:10 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:17:21.198 05:16:10 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:21.198 05:16:10 -- host/failover.sh@116 -- # nvmftestfini 00:17:21.198 05:16:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:21.198 05:16:10 -- nvmf/common.sh@116 -- # sync 00:17:21.198 05:16:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:21.198 05:16:10 -- nvmf/common.sh@119 -- # set +e 00:17:21.198 05:16:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:21.198 05:16:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:21.198 rmmod nvme_tcp 
00:17:21.198 rmmod nvme_fabrics 00:17:21.198 rmmod nvme_keyring 00:17:21.198 05:16:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:21.198 05:16:10 -- nvmf/common.sh@123 -- # set -e 00:17:21.198 05:16:10 -- nvmf/common.sh@124 -- # return 0 00:17:21.198 05:16:10 -- nvmf/common.sh@477 -- # '[' -n 82197 ']' 00:17:21.198 05:16:10 -- nvmf/common.sh@478 -- # killprocess 82197 00:17:21.198 05:16:10 -- common/autotest_common.sh@936 -- # '[' -z 82197 ']' 00:17:21.198 05:16:10 -- common/autotest_common.sh@940 -- # kill -0 82197 00:17:21.198 05:16:10 -- common/autotest_common.sh@941 -- # uname 00:17:21.198 05:16:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:21.198 05:16:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82197 00:17:21.578 killing process with pid 82197 00:17:21.578 05:16:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:21.578 05:16:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:21.578 05:16:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82197' 00:17:21.578 05:16:10 -- common/autotest_common.sh@955 -- # kill 82197 00:17:21.578 05:16:10 -- common/autotest_common.sh@960 -- # wait 82197 00:17:21.578 05:16:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:21.578 05:16:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:21.578 05:16:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:21.578 05:16:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:21.578 05:16:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:21.578 05:16:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.578 05:16:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.578 05:16:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.578 05:16:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:21.578 00:17:21.578 real 0m33.130s 00:17:21.578 user 2m9.138s 00:17:21.578 sys 0m5.544s 00:17:21.578 05:16:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:21.578 05:16:11 -- common/autotest_common.sh@10 -- # set +x 00:17:21.578 ************************************ 00:17:21.578 END TEST nvmf_failover 00:17:21.578 ************************************ 00:17:21.578 05:16:11 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:21.578 05:16:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:21.578 05:16:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:21.578 05:16:11 -- common/autotest_common.sh@10 -- # set +x 00:17:21.578 ************************************ 00:17:21.578 START TEST nvmf_discovery 00:17:21.578 ************************************ 00:17:21.578 05:16:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:21.578 * Looking for test storage... 
00:17:21.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:21.578 05:16:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:21.578 05:16:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:21.578 05:16:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:21.837 05:16:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:21.837 05:16:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:21.837 05:16:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:21.837 05:16:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:21.837 05:16:11 -- scripts/common.sh@335 -- # IFS=.-: 00:17:21.837 05:16:11 -- scripts/common.sh@335 -- # read -ra ver1 00:17:21.837 05:16:11 -- scripts/common.sh@336 -- # IFS=.-: 00:17:21.837 05:16:11 -- scripts/common.sh@336 -- # read -ra ver2 00:17:21.837 05:16:11 -- scripts/common.sh@337 -- # local 'op=<' 00:17:21.837 05:16:11 -- scripts/common.sh@339 -- # ver1_l=2 00:17:21.837 05:16:11 -- scripts/common.sh@340 -- # ver2_l=1 00:17:21.837 05:16:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:21.837 05:16:11 -- scripts/common.sh@343 -- # case "$op" in 00:17:21.837 05:16:11 -- scripts/common.sh@344 -- # : 1 00:17:21.837 05:16:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:21.837 05:16:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:21.837 05:16:11 -- scripts/common.sh@364 -- # decimal 1 00:17:21.837 05:16:11 -- scripts/common.sh@352 -- # local d=1 00:17:21.837 05:16:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:21.837 05:16:11 -- scripts/common.sh@354 -- # echo 1 00:17:21.837 05:16:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:21.837 05:16:11 -- scripts/common.sh@365 -- # decimal 2 00:17:21.837 05:16:11 -- scripts/common.sh@352 -- # local d=2 00:17:21.837 05:16:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:21.837 05:16:11 -- scripts/common.sh@354 -- # echo 2 00:17:21.837 05:16:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:21.837 05:16:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:21.837 05:16:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:21.837 05:16:11 -- scripts/common.sh@367 -- # return 0 00:17:21.837 05:16:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:21.837 05:16:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:21.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.837 --rc genhtml_branch_coverage=1 00:17:21.837 --rc genhtml_function_coverage=1 00:17:21.837 --rc genhtml_legend=1 00:17:21.837 --rc geninfo_all_blocks=1 00:17:21.837 --rc geninfo_unexecuted_blocks=1 00:17:21.837 00:17:21.837 ' 00:17:21.837 05:16:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:21.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.837 --rc genhtml_branch_coverage=1 00:17:21.837 --rc genhtml_function_coverage=1 00:17:21.837 --rc genhtml_legend=1 00:17:21.837 --rc geninfo_all_blocks=1 00:17:21.837 --rc geninfo_unexecuted_blocks=1 00:17:21.837 00:17:21.837 ' 00:17:21.837 05:16:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:21.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.837 --rc genhtml_branch_coverage=1 00:17:21.837 --rc genhtml_function_coverage=1 00:17:21.837 --rc genhtml_legend=1 00:17:21.837 --rc geninfo_all_blocks=1 00:17:21.837 --rc geninfo_unexecuted_blocks=1 00:17:21.837 00:17:21.837 ' 00:17:21.837 
05:16:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:21.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.837 --rc genhtml_branch_coverage=1 00:17:21.837 --rc genhtml_function_coverage=1 00:17:21.837 --rc genhtml_legend=1 00:17:21.837 --rc geninfo_all_blocks=1 00:17:21.837 --rc geninfo_unexecuted_blocks=1 00:17:21.837 00:17:21.837 ' 00:17:21.837 05:16:11 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:21.837 05:16:11 -- nvmf/common.sh@7 -- # uname -s 00:17:21.837 05:16:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.837 05:16:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.837 05:16:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.837 05:16:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.837 05:16:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.837 05:16:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.837 05:16:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.837 05:16:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.837 05:16:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.837 05:16:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.837 05:16:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:17:21.837 05:16:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:17:21.837 05:16:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.837 05:16:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.837 05:16:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:21.837 05:16:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:21.837 05:16:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.837 05:16:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.837 05:16:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.837 05:16:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.837 05:16:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.838 05:16:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.838 05:16:11 -- paths/export.sh@5 -- # export PATH 00:17:21.838 05:16:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.838 05:16:11 -- nvmf/common.sh@46 -- # : 0 00:17:21.838 05:16:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:21.838 05:16:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:21.838 05:16:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:21.838 05:16:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.838 05:16:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.838 05:16:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:21.838 05:16:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:21.838 05:16:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:21.838 05:16:11 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:17:21.838 05:16:11 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:17:21.838 05:16:11 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:21.838 05:16:11 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:17:21.838 05:16:11 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:17:21.838 05:16:11 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:17:21.838 05:16:11 -- host/discovery.sh@25 -- # nvmftestinit 00:17:21.838 05:16:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:21.838 05:16:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.838 05:16:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:21.838 05:16:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:21.838 05:16:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:21.838 05:16:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.838 05:16:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.838 05:16:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.838 05:16:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:21.838 05:16:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:21.838 05:16:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:21.838 05:16:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:21.838 05:16:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:21.838 05:16:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:21.838 05:16:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.838 05:16:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:21.838 05:16:11 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:21.838 05:16:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:21.838 05:16:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:21.838 05:16:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:21.838 05:16:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:21.838 05:16:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.838 05:16:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:21.838 05:16:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:21.838 05:16:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:21.838 05:16:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:21.838 05:16:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:21.838 05:16:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:21.838 Cannot find device "nvmf_tgt_br" 00:17:21.838 05:16:11 -- nvmf/common.sh@154 -- # true 00:17:21.838 05:16:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:21.838 Cannot find device "nvmf_tgt_br2" 00:17:21.838 05:16:11 -- nvmf/common.sh@155 -- # true 00:17:21.838 05:16:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:21.838 05:16:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:21.838 Cannot find device "nvmf_tgt_br" 00:17:21.838 05:16:11 -- nvmf/common.sh@157 -- # true 00:17:21.838 05:16:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:21.838 Cannot find device "nvmf_tgt_br2" 00:17:21.838 05:16:11 -- nvmf/common.sh@158 -- # true 00:17:21.838 05:16:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:21.838 05:16:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:21.838 05:16:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:21.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:21.838 05:16:11 -- nvmf/common.sh@161 -- # true 00:17:21.838 05:16:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:21.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:21.838 05:16:11 -- nvmf/common.sh@162 -- # true 00:17:21.838 05:16:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:21.838 05:16:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:21.838 05:16:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:21.838 05:16:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:21.838 05:16:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:21.838 05:16:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:21.838 05:16:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:21.838 05:16:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:22.100 05:16:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:22.100 05:16:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:22.100 05:16:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:22.100 05:16:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:22.100 05:16:11 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:22.100 05:16:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:22.100 05:16:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:22.100 05:16:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:22.100 05:16:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:22.100 05:16:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:22.100 05:16:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:22.100 05:16:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:22.100 05:16:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:22.100 05:16:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:22.100 05:16:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:22.100 05:16:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:22.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:22.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:17:22.100 00:17:22.100 --- 10.0.0.2 ping statistics --- 00:17:22.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.100 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:17:22.100 05:16:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:22.100 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:22.100 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:17:22.100 00:17:22.100 --- 10.0.0.3 ping statistics --- 00:17:22.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.100 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:22.100 05:16:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:22.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:22.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:17:22.100 00:17:22.100 --- 10.0.0.1 ping statistics --- 00:17:22.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.100 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:22.100 05:16:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:22.100 05:16:11 -- nvmf/common.sh@421 -- # return 0 00:17:22.100 05:16:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:22.100 05:16:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:22.100 05:16:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:22.100 05:16:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:22.100 05:16:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:22.100 05:16:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:22.100 05:16:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:22.100 05:16:11 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:17:22.100 05:16:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:22.100 05:16:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:22.100 05:16:11 -- common/autotest_common.sh@10 -- # set +x 00:17:22.100 05:16:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:22.100 05:16:11 -- nvmf/common.sh@469 -- # nvmfpid=82820 00:17:22.100 05:16:11 -- nvmf/common.sh@470 -- # waitforlisten 82820 00:17:22.100 05:16:11 -- common/autotest_common.sh@829 -- # '[' -z 82820 ']' 00:17:22.100 05:16:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.100 05:16:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.101 05:16:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.101 05:16:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.101 05:16:11 -- common/autotest_common.sh@10 -- # set +x 00:17:22.101 [2024-12-08 05:16:11.834284] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:22.101 [2024-12-08 05:16:11.834390] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.362 [2024-12-08 05:16:11.973945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.362 [2024-12-08 05:16:12.012194] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:22.362 [2024-12-08 05:16:12.012367] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.362 [2024-12-08 05:16:12.012383] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.362 [2024-12-08 05:16:12.012393] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
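For reference, the nvmf_veth_init and nvmfappstart steps traced above reduce to roughly the shell below. This is a condensed sketch rather than the full helper: the pre-flight cleanup, error handling, and the waitforlisten polling are omitted, and the interface names, 10.0.0.x addresses, and SPDK paths are simply the ones this job uses.

# one network namespace for the target plus three veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# initiator keeps 10.0.0.1; the target namespace gets 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the host-side ends together and allow NVMe/TCP (port 4420) in from the initiator interface
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# connectivity checks, then the target app inside the namespace (core mask 0x2)
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &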
00:17:22.362 [2024-12-08 05:16:12.012422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.294 05:16:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.294 05:16:12 -- common/autotest_common.sh@862 -- # return 0 00:17:23.294 05:16:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:23.294 05:16:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:23.294 05:16:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.294 05:16:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.294 05:16:12 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:23.294 05:16:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.294 05:16:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.294 [2024-12-08 05:16:12.935454] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.294 05:16:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.294 05:16:12 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:17:23.294 05:16:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.294 05:16:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.294 [2024-12-08 05:16:12.943620] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:23.294 05:16:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.294 05:16:12 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:17:23.294 05:16:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.294 05:16:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.294 null0 00:17:23.294 05:16:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.294 05:16:12 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:17:23.294 05:16:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.294 05:16:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.294 null1 00:17:23.294 05:16:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.294 05:16:12 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:17:23.294 05:16:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.294 05:16:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.294 05:16:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.294 05:16:12 -- host/discovery.sh@45 -- # hostpid=82852 00:17:23.294 05:16:12 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:17:23.295 05:16:12 -- host/discovery.sh@46 -- # waitforlisten 82852 /tmp/host.sock 00:17:23.295 05:16:12 -- common/autotest_common.sh@829 -- # '[' -z 82852 ']' 00:17:23.295 05:16:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:17:23.295 05:16:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:23.295 05:16:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:23.295 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:23.295 05:16:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:23.295 05:16:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.295 [2024-12-08 05:16:13.021007] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:23.295 [2024-12-08 05:16:13.021095] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82852 ] 00:17:23.553 [2024-12-08 05:16:13.160787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.553 [2024-12-08 05:16:13.195499] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:23.553 [2024-12-08 05:16:13.195661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.553 05:16:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.553 05:16:13 -- common/autotest_common.sh@862 -- # return 0 00:17:23.553 05:16:13 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:23.553 05:16:13 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:17:23.553 05:16:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.553 05:16:13 -- common/autotest_common.sh@10 -- # set +x 00:17:23.553 05:16:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.553 05:16:13 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:17:23.553 05:16:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.553 05:16:13 -- common/autotest_common.sh@10 -- # set +x 00:17:23.553 05:16:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.553 05:16:13 -- host/discovery.sh@72 -- # notify_id=0 00:17:23.553 05:16:13 -- host/discovery.sh@78 -- # get_subsystem_names 00:17:23.553 05:16:13 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:23.553 05:16:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:23.553 05:16:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.553 05:16:13 -- host/discovery.sh@59 -- # sort 00:17:23.553 05:16:13 -- common/autotest_common.sh@10 -- # set +x 00:17:23.553 05:16:13 -- host/discovery.sh@59 -- # xargs 00:17:23.553 05:16:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.811 05:16:13 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:17:23.811 05:16:13 -- host/discovery.sh@79 -- # get_bdev_list 00:17:23.811 05:16:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:23.811 05:16:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:23.811 05:16:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.811 05:16:13 -- host/discovery.sh@55 -- # sort 00:17:23.811 05:16:13 -- common/autotest_common.sh@10 -- # set +x 00:17:23.811 05:16:13 -- host/discovery.sh@55 -- # xargs 00:17:23.811 05:16:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.811 05:16:13 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:17:23.811 05:16:13 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:17:23.811 05:16:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.811 05:16:13 -- common/autotest_common.sh@10 -- # set +x 00:17:23.811 05:16:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.811 05:16:13 -- host/discovery.sh@82 -- # get_subsystem_names 00:17:23.811 05:16:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:23.811 05:16:13 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:23.811 05:16:13 -- host/discovery.sh@59 
-- # sort 00:17:23.811 05:16:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.811 05:16:13 -- common/autotest_common.sh@10 -- # set +x 00:17:23.811 05:16:13 -- host/discovery.sh@59 -- # xargs 00:17:23.811 05:16:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.811 05:16:13 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:17:23.811 05:16:13 -- host/discovery.sh@83 -- # get_bdev_list 00:17:23.811 05:16:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:23.811 05:16:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.811 05:16:13 -- common/autotest_common.sh@10 -- # set +x 00:17:23.811 05:16:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:23.811 05:16:13 -- host/discovery.sh@55 -- # sort 00:17:23.812 05:16:13 -- host/discovery.sh@55 -- # xargs 00:17:23.812 05:16:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.812 05:16:13 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:17:23.812 05:16:13 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:17:23.812 05:16:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.812 05:16:13 -- common/autotest_common.sh@10 -- # set +x 00:17:23.812 05:16:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.812 05:16:13 -- host/discovery.sh@86 -- # get_subsystem_names 00:17:23.812 05:16:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:23.812 05:16:13 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:23.812 05:16:13 -- host/discovery.sh@59 -- # xargs 00:17:23.812 05:16:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.812 05:16:13 -- host/discovery.sh@59 -- # sort 00:17:23.812 05:16:13 -- common/autotest_common.sh@10 -- # set +x 00:17:23.812 05:16:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.069 05:16:13 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:17:24.069 05:16:13 -- host/discovery.sh@87 -- # get_bdev_list 00:17:24.069 05:16:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:24.069 05:16:13 -- host/discovery.sh@55 -- # xargs 00:17:24.069 05:16:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:24.069 05:16:13 -- host/discovery.sh@55 -- # sort 00:17:24.069 05:16:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.069 05:16:13 -- common/autotest_common.sh@10 -- # set +x 00:17:24.069 05:16:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.069 05:16:13 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:17:24.069 05:16:13 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:24.069 05:16:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.069 05:16:13 -- common/autotest_common.sh@10 -- # set +x 00:17:24.069 [2024-12-08 05:16:13.667801] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.069 05:16:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.069 05:16:13 -- host/discovery.sh@92 -- # get_subsystem_names 00:17:24.069 05:16:13 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:24.069 05:16:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:24.069 05:16:13 -- host/discovery.sh@59 -- # sort 00:17:24.069 05:16:13 -- host/discovery.sh@59 -- # xargs 00:17:24.069 05:16:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.069 05:16:13 -- common/autotest_common.sh@10 -- # set +x 00:17:24.069 05:16:13 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.069 05:16:13 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:17:24.069 05:16:13 -- host/discovery.sh@93 -- # get_bdev_list 00:17:24.069 05:16:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:24.069 05:16:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.069 05:16:13 -- common/autotest_common.sh@10 -- # set +x 00:17:24.069 05:16:13 -- host/discovery.sh@55 -- # sort 00:17:24.069 05:16:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:24.069 05:16:13 -- host/discovery.sh@55 -- # xargs 00:17:24.069 05:16:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.069 05:16:13 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:17:24.069 05:16:13 -- host/discovery.sh@94 -- # get_notification_count 00:17:24.069 05:16:13 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:24.069 05:16:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.069 05:16:13 -- host/discovery.sh@74 -- # jq '. | length' 00:17:24.069 05:16:13 -- common/autotest_common.sh@10 -- # set +x 00:17:24.069 05:16:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.069 05:16:13 -- host/discovery.sh@74 -- # notification_count=0 00:17:24.069 05:16:13 -- host/discovery.sh@75 -- # notify_id=0 00:17:24.069 05:16:13 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:17:24.069 05:16:13 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:17:24.069 05:16:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.069 05:16:13 -- common/autotest_common.sh@10 -- # set +x 00:17:24.069 05:16:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.069 05:16:13 -- host/discovery.sh@100 -- # sleep 1 00:17:24.636 [2024-12-08 05:16:14.288343] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:24.636 [2024-12-08 05:16:14.288392] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:24.636 [2024-12-08 05:16:14.288413] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:24.636 [2024-12-08 05:16:14.294393] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:24.636 [2024-12-08 05:16:14.350384] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:24.636 [2024-12-08 05:16:14.350443] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:25.203 05:16:14 -- host/discovery.sh@101 -- # get_subsystem_names 00:17:25.203 05:16:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:25.203 05:16:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.203 05:16:14 -- common/autotest_common.sh@10 -- # set +x 00:17:25.203 05:16:14 -- host/discovery.sh@59 -- # sort 00:17:25.203 05:16:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:25.203 05:16:14 -- host/discovery.sh@59 -- # xargs 00:17:25.203 05:16:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.203 05:16:14 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.203 05:16:14 -- host/discovery.sh@102 -- # get_bdev_list 00:17:25.203 05:16:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:25.203 
05:16:14 -- host/discovery.sh@55 -- # xargs 00:17:25.203 05:16:14 -- host/discovery.sh@55 -- # sort 00:17:25.203 05:16:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.203 05:16:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:25.203 05:16:14 -- common/autotest_common.sh@10 -- # set +x 00:17:25.203 05:16:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.203 05:16:14 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:17:25.203 05:16:14 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:17:25.203 05:16:14 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:25.203 05:16:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.203 05:16:14 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:25.203 05:16:14 -- host/discovery.sh@63 -- # sort -n 00:17:25.203 05:16:14 -- common/autotest_common.sh@10 -- # set +x 00:17:25.203 05:16:14 -- host/discovery.sh@63 -- # xargs 00:17:25.460 05:16:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.460 05:16:15 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:17:25.460 05:16:15 -- host/discovery.sh@104 -- # get_notification_count 00:17:25.460 05:16:15 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:25.460 05:16:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.460 05:16:15 -- common/autotest_common.sh@10 -- # set +x 00:17:25.460 05:16:15 -- host/discovery.sh@74 -- # jq '. | length' 00:17:25.460 05:16:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.460 05:16:15 -- host/discovery.sh@74 -- # notification_count=1 00:17:25.460 05:16:15 -- host/discovery.sh@75 -- # notify_id=1 00:17:25.460 05:16:15 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:17:25.460 05:16:15 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:17:25.460 05:16:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.460 05:16:15 -- common/autotest_common.sh@10 -- # set +x 00:17:25.460 05:16:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.460 05:16:15 -- host/discovery.sh@109 -- # sleep 1 00:17:26.393 05:16:16 -- host/discovery.sh@110 -- # get_bdev_list 00:17:26.393 05:16:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:26.393 05:16:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:26.393 05:16:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.393 05:16:16 -- host/discovery.sh@55 -- # sort 00:17:26.393 05:16:16 -- common/autotest_common.sh@10 -- # set +x 00:17:26.393 05:16:16 -- host/discovery.sh@55 -- # xargs 00:17:26.393 05:16:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.393 05:16:16 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:26.393 05:16:16 -- host/discovery.sh@111 -- # get_notification_count 00:17:26.393 05:16:16 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:17:26.393 05:16:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.393 05:16:16 -- common/autotest_common.sh@10 -- # set +x 00:17:26.393 05:16:16 -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:26.393 05:16:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.651 05:16:16 -- host/discovery.sh@74 -- # notification_count=1 00:17:26.651 05:16:16 -- host/discovery.sh@75 -- # notify_id=2 00:17:26.651 05:16:16 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:17:26.651 05:16:16 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:17:26.651 05:16:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.651 05:16:16 -- common/autotest_common.sh@10 -- # set +x 00:17:26.651 [2024-12-08 05:16:16.195233] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:26.651 [2024-12-08 05:16:16.195727] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:26.651 [2024-12-08 05:16:16.195763] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:26.651 05:16:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.651 05:16:16 -- host/discovery.sh@117 -- # sleep 1 00:17:26.651 [2024-12-08 05:16:16.201709] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:17:26.651 [2024-12-08 05:16:16.261990] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:26.651 [2024-12-08 05:16:16.262024] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:26.651 [2024-12-08 05:16:16.262032] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:27.589 05:16:17 -- host/discovery.sh@118 -- # get_subsystem_names 00:17:27.589 05:16:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:27.589 05:16:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.589 05:16:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:27.589 05:16:17 -- host/discovery.sh@59 -- # sort 00:17:27.589 05:16:17 -- common/autotest_common.sh@10 -- # set +x 00:17:27.589 05:16:17 -- host/discovery.sh@59 -- # xargs 00:17:27.589 05:16:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.589 05:16:17 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.589 05:16:17 -- host/discovery.sh@119 -- # get_bdev_list 00:17:27.589 05:16:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:27.589 05:16:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:27.589 05:16:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.589 05:16:17 -- common/autotest_common.sh@10 -- # set +x 00:17:27.589 05:16:17 -- host/discovery.sh@55 -- # sort 00:17:27.589 05:16:17 -- host/discovery.sh@55 -- # xargs 00:17:27.589 05:16:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.589 05:16:17 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:27.589 05:16:17 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:17:27.589 05:16:17 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:27.589 05:16:17 -- host/discovery.sh@63 -- # sort -n 00:17:27.589 05:16:17 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:27.589 05:16:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.589 05:16:17 -- common/autotest_common.sh@10 
-- # set +x 00:17:27.589 05:16:17 -- host/discovery.sh@63 -- # xargs 00:17:27.589 05:16:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.589 05:16:17 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:17:27.589 05:16:17 -- host/discovery.sh@121 -- # get_notification_count 00:17:27.848 05:16:17 -- host/discovery.sh@74 -- # jq '. | length' 00:17:27.848 05:16:17 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:27.848 05:16:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.848 05:16:17 -- common/autotest_common.sh@10 -- # set +x 00:17:27.848 05:16:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.848 05:16:17 -- host/discovery.sh@74 -- # notification_count=0 00:17:27.848 05:16:17 -- host/discovery.sh@75 -- # notify_id=2 00:17:27.848 05:16:17 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:17:27.848 05:16:17 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:27.848 05:16:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.848 05:16:17 -- common/autotest_common.sh@10 -- # set +x 00:17:27.848 [2024-12-08 05:16:17.426419] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:27.848 [2024-12-08 05:16:17.426462] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:27.848 [2024-12-08 05:16:17.428129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.848 [2024-12-08 05:16:17.428310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.848 [2024-12-08 05:16:17.428451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.848 [2024-12-08 05:16:17.428564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.848 [2024-12-08 05:16:17.428713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.848 [2024-12-08 05:16:17.428854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.848 [2024-12-08 05:16:17.428973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.848 [2024-12-08 05:16:17.429155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.848 [2024-12-08 05:16:17.429212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5241f0 is same with the state(5) to be set 00:17:27.848 05:16:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.848 05:16:17 -- host/discovery.sh@127 -- # sleep 1 00:17:27.848 [2024-12-08 05:16:17.433427] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:17:27.848 [2024-12-08 05:16:17.433462] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:27.848 [2024-12-08 05:16:17.433522] 
nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5241f0 (9): Bad file descriptor 00:17:28.785 05:16:18 -- host/discovery.sh@128 -- # get_subsystem_names 00:17:28.785 05:16:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:28.785 05:16:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.785 05:16:18 -- common/autotest_common.sh@10 -- # set +x 00:17:28.785 05:16:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:28.785 05:16:18 -- host/discovery.sh@59 -- # sort 00:17:28.785 05:16:18 -- host/discovery.sh@59 -- # xargs 00:17:28.785 05:16:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.785 05:16:18 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.785 05:16:18 -- host/discovery.sh@129 -- # get_bdev_list 00:17:28.785 05:16:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:28.785 05:16:18 -- host/discovery.sh@55 -- # xargs 00:17:28.785 05:16:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:28.785 05:16:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.785 05:16:18 -- host/discovery.sh@55 -- # sort 00:17:28.785 05:16:18 -- common/autotest_common.sh@10 -- # set +x 00:17:28.785 05:16:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.785 05:16:18 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:28.785 05:16:18 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:17:28.785 05:16:18 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:28.785 05:16:18 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:28.785 05:16:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.785 05:16:18 -- common/autotest_common.sh@10 -- # set +x 00:17:28.785 05:16:18 -- host/discovery.sh@63 -- # sort -n 00:17:28.785 05:16:18 -- host/discovery.sh@63 -- # xargs 00:17:29.043 05:16:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.043 05:16:18 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:17:29.043 05:16:18 -- host/discovery.sh@131 -- # get_notification_count 00:17:29.043 05:16:18 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:29.043 05:16:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.043 05:16:18 -- common/autotest_common.sh@10 -- # set +x 00:17:29.043 05:16:18 -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:29.043 05:16:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.043 05:16:18 -- host/discovery.sh@74 -- # notification_count=0 00:17:29.043 05:16:18 -- host/discovery.sh@75 -- # notify_id=2 00:17:29.043 05:16:18 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:17:29.043 05:16:18 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:17:29.043 05:16:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.043 05:16:18 -- common/autotest_common.sh@10 -- # set +x 00:17:29.043 05:16:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.043 05:16:18 -- host/discovery.sh@135 -- # sleep 1 00:17:29.979 05:16:19 -- host/discovery.sh@136 -- # get_subsystem_names 00:17:29.979 05:16:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:29.979 05:16:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.979 05:16:19 -- common/autotest_common.sh@10 -- # set +x 00:17:29.979 05:16:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:29.979 05:16:19 -- host/discovery.sh@59 -- # sort 00:17:29.979 05:16:19 -- host/discovery.sh@59 -- # xargs 00:17:29.979 05:16:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.979 05:16:19 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:17:29.979 05:16:19 -- host/discovery.sh@137 -- # get_bdev_list 00:17:29.979 05:16:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:29.979 05:16:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:29.979 05:16:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.979 05:16:19 -- common/autotest_common.sh@10 -- # set +x 00:17:29.979 05:16:19 -- host/discovery.sh@55 -- # sort 00:17:29.979 05:16:19 -- host/discovery.sh@55 -- # xargs 00:17:29.979 05:16:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.238 05:16:19 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:17:30.238 05:16:19 -- host/discovery.sh@138 -- # get_notification_count 00:17:30.238 05:16:19 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:30.238 05:16:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.238 05:16:19 -- common/autotest_common.sh@10 -- # set +x 00:17:30.238 05:16:19 -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:30.238 05:16:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.238 05:16:19 -- host/discovery.sh@74 -- # notification_count=2 00:17:30.238 05:16:19 -- host/discovery.sh@75 -- # notify_id=4 00:17:30.238 05:16:19 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:17:30.238 05:16:19 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:30.238 05:16:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.238 05:16:19 -- common/autotest_common.sh@10 -- # set +x 00:17:31.171 [2024-12-08 05:16:20.841139] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:31.171 [2024-12-08 05:16:20.841181] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:31.171 [2024-12-08 05:16:20.841200] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:31.171 [2024-12-08 05:16:20.847172] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:17:31.171 [2024-12-08 05:16:20.906465] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:31.171 [2024-12-08 05:16:20.906516] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:31.171 05:16:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.171 05:16:20 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:31.171 05:16:20 -- common/autotest_common.sh@650 -- # local es=0 00:17:31.171 05:16:20 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:31.171 05:16:20 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:31.171 05:16:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:31.171 05:16:20 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:31.171 05:16:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:31.171 05:16:20 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:31.171 05:16:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.171 05:16:20 -- common/autotest_common.sh@10 -- # set +x 00:17:31.171 request: 00:17:31.171 { 00:17:31.171 "name": "nvme", 00:17:31.171 "trtype": "tcp", 00:17:31.171 "traddr": "10.0.0.2", 00:17:31.171 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:31.171 "adrfam": "ipv4", 00:17:31.171 "trsvcid": "8009", 00:17:31.171 "wait_for_attach": true, 00:17:31.171 "method": "bdev_nvme_start_discovery", 00:17:31.171 "req_id": 1 00:17:31.171 } 00:17:31.171 Got JSON-RPC error response 00:17:31.171 response: 00:17:31.171 { 00:17:31.171 "code": -17, 00:17:31.171 "message": "File exists" 00:17:31.171 } 00:17:31.171 05:16:20 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:31.171 05:16:20 -- common/autotest_common.sh@653 -- # es=1 00:17:31.171 05:16:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:31.171 05:16:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:31.171 05:16:20 -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:31.171 05:16:20 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:17:31.171 05:16:20 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:31.171 05:16:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.171 05:16:20 -- host/discovery.sh@67 -- # sort 00:17:31.171 05:16:20 -- common/autotest_common.sh@10 -- # set +x 00:17:31.171 05:16:20 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:31.171 05:16:20 -- host/discovery.sh@67 -- # xargs 00:17:31.171 05:16:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.429 05:16:20 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:17:31.429 05:16:20 -- host/discovery.sh@147 -- # get_bdev_list 00:17:31.429 05:16:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:31.429 05:16:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.429 05:16:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:31.429 05:16:20 -- host/discovery.sh@55 -- # sort 00:17:31.429 05:16:20 -- common/autotest_common.sh@10 -- # set +x 00:17:31.429 05:16:20 -- host/discovery.sh@55 -- # xargs 00:17:31.429 05:16:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.429 05:16:21 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:31.429 05:16:21 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:31.429 05:16:21 -- common/autotest_common.sh@650 -- # local es=0 00:17:31.429 05:16:21 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:31.429 05:16:21 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:31.429 05:16:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:31.429 05:16:21 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:31.429 05:16:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:31.429 05:16:21 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:31.429 05:16:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.429 05:16:21 -- common/autotest_common.sh@10 -- # set +x 00:17:31.429 request: 00:17:31.429 { 00:17:31.429 "name": "nvme_second", 00:17:31.429 "trtype": "tcp", 00:17:31.429 "traddr": "10.0.0.2", 00:17:31.429 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:31.429 "adrfam": "ipv4", 00:17:31.429 "trsvcid": "8009", 00:17:31.429 "wait_for_attach": true, 00:17:31.429 "method": "bdev_nvme_start_discovery", 00:17:31.429 "req_id": 1 00:17:31.429 } 00:17:31.429 Got JSON-RPC error response 00:17:31.429 response: 00:17:31.429 { 00:17:31.429 "code": -17, 00:17:31.429 "message": "File exists" 00:17:31.429 } 00:17:31.429 05:16:21 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:31.429 05:16:21 -- common/autotest_common.sh@653 -- # es=1 00:17:31.429 05:16:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:31.429 05:16:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:31.429 05:16:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:31.429 05:16:21 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:17:31.429 05:16:21 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_discovery_info 00:17:31.429 05:16:21 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:31.429 05:16:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.429 05:16:21 -- host/discovery.sh@67 -- # sort 00:17:31.429 05:16:21 -- host/discovery.sh@67 -- # xargs 00:17:31.429 05:16:21 -- common/autotest_common.sh@10 -- # set +x 00:17:31.429 05:16:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.429 05:16:21 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:17:31.429 05:16:21 -- host/discovery.sh@153 -- # get_bdev_list 00:17:31.429 05:16:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:31.429 05:16:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:31.429 05:16:21 -- host/discovery.sh@55 -- # xargs 00:17:31.429 05:16:21 -- host/discovery.sh@55 -- # sort 00:17:31.429 05:16:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.429 05:16:21 -- common/autotest_common.sh@10 -- # set +x 00:17:31.429 05:16:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.429 05:16:21 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:31.429 05:16:21 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:31.429 05:16:21 -- common/autotest_common.sh@650 -- # local es=0 00:17:31.429 05:16:21 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:31.429 05:16:21 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:31.429 05:16:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:31.429 05:16:21 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:31.430 05:16:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:31.430 05:16:21 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:31.430 05:16:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.430 05:16:21 -- common/autotest_common.sh@10 -- # set +x 00:17:32.803 [2024-12-08 05:16:22.184506] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:32.803 [2024-12-08 05:16:22.184615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:32.803 [2024-12-08 05:16:22.184661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:32.803 [2024-12-08 05:16:22.184696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bd5c0 with addr=10.0.0.2, port=8010 00:17:32.803 [2024-12-08 05:16:22.184718] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:32.803 [2024-12-08 05:16:22.184728] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:32.803 [2024-12-08 05:16:22.184737] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:33.738 [2024-12-08 05:16:23.184523] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:33.738 [2024-12-08 05:16:23.184633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:33.738 [2024-12-08 05:16:23.184699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:33.738 [2024-12-08 
05:16:23.184719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x57fbc0 with addr=10.0.0.2, port=8010 00:17:33.738 [2024-12-08 05:16:23.184739] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:33.738 [2024-12-08 05:16:23.184749] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:33.738 [2024-12-08 05:16:23.184758] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:34.674 [2024-12-08 05:16:24.184369] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:17:34.674 request: 00:17:34.674 { 00:17:34.674 "name": "nvme_second", 00:17:34.674 "trtype": "tcp", 00:17:34.674 "traddr": "10.0.0.2", 00:17:34.674 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:34.674 "adrfam": "ipv4", 00:17:34.674 "trsvcid": "8010", 00:17:34.674 "attach_timeout_ms": 3000, 00:17:34.674 "method": "bdev_nvme_start_discovery", 00:17:34.674 "req_id": 1 00:17:34.674 } 00:17:34.674 Got JSON-RPC error response 00:17:34.674 response: 00:17:34.674 { 00:17:34.674 "code": -110, 00:17:34.674 "message": "Connection timed out" 00:17:34.674 } 00:17:34.674 05:16:24 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:34.674 05:16:24 -- common/autotest_common.sh@653 -- # es=1 00:17:34.674 05:16:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:34.674 05:16:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:34.674 05:16:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:34.674 05:16:24 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:17:34.674 05:16:24 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:34.674 05:16:24 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:34.674 05:16:24 -- host/discovery.sh@67 -- # sort 00:17:34.674 05:16:24 -- host/discovery.sh@67 -- # xargs 00:17:34.674 05:16:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.674 05:16:24 -- common/autotest_common.sh@10 -- # set +x 00:17:34.674 05:16:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.674 05:16:24 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:17:34.674 05:16:24 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:17:34.674 05:16:24 -- host/discovery.sh@162 -- # kill 82852 00:17:34.674 05:16:24 -- host/discovery.sh@163 -- # nvmftestfini 00:17:34.674 05:16:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:34.674 05:16:24 -- nvmf/common.sh@116 -- # sync 00:17:34.674 05:16:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:34.674 05:16:24 -- nvmf/common.sh@119 -- # set +e 00:17:34.674 05:16:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:34.674 05:16:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:34.674 rmmod nvme_tcp 00:17:34.674 rmmod nvme_fabrics 00:17:34.674 rmmod nvme_keyring 00:17:34.674 05:16:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:34.674 05:16:24 -- nvmf/common.sh@123 -- # set -e 00:17:34.674 05:16:24 -- nvmf/common.sh@124 -- # return 0 00:17:34.674 05:16:24 -- nvmf/common.sh@477 -- # '[' -n 82820 ']' 00:17:34.674 05:16:24 -- nvmf/common.sh@478 -- # killprocess 82820 00:17:34.674 05:16:24 -- common/autotest_common.sh@936 -- # '[' -z 82820 ']' 00:17:34.674 05:16:24 -- common/autotest_common.sh@940 -- # kill -0 82820 00:17:34.674 05:16:24 -- common/autotest_common.sh@941 -- # uname 00:17:34.674 05:16:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:34.674 05:16:24 
-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82820 00:17:34.674 killing process with pid 82820 00:17:34.674 05:16:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:34.674 05:16:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:34.674 05:16:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82820' 00:17:34.674 05:16:24 -- common/autotest_common.sh@955 -- # kill 82820 00:17:34.674 05:16:24 -- common/autotest_common.sh@960 -- # wait 82820 00:17:34.932 05:16:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:34.932 05:16:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:34.932 05:16:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:34.932 05:16:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:34.932 05:16:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:34.932 05:16:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.932 05:16:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.932 05:16:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.932 05:16:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:34.932 ************************************ 00:17:34.932 END TEST nvmf_discovery 00:17:34.932 ************************************ 00:17:34.932 00:17:34.932 real 0m13.361s 00:17:34.932 user 0m25.406s 00:17:34.932 sys 0m2.146s 00:17:34.932 05:16:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:34.932 05:16:24 -- common/autotest_common.sh@10 -- # set +x 00:17:34.932 05:16:24 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:34.932 05:16:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:34.932 05:16:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:34.932 05:16:24 -- common/autotest_common.sh@10 -- # set +x 00:17:34.932 ************************************ 00:17:34.932 START TEST nvmf_discovery_remove_ifc 00:17:34.932 ************************************ 00:17:34.932 05:16:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:34.932 * Looking for test storage... 
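The nvmf_discovery run that just ended boils down to the RPC sequence below: the in-namespace nvmf_tgt acts as the target, a second nvmf_tgt on /tmp/host.sock acts as the host, and bdev_nvme_start_discovery against the discovery service on 10.0.0.2:8009 keeps the host's controllers and bdevs in sync as listeners and namespaces come and go. A trimmed sketch only — the per-step assertions on get_subsystem_names/get_bdev_list and the notification counters are left out, and rpc_cmd is the autotest wrapper seen in the trace (outside the harness the equivalent calls go through scripts/rpc.py).

# target side (default /var/tmp/spdk.sock): TCP transport, discovery listener on 8009, two null bdevs
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc_cmd bdev_null_create null0 1000 512
rpc_cmd bdev_null_create null1 1000 512

# host side: a second nvmf_tgt follows the discovery service
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

# target-side changes are then picked up automatically by the host
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test      # nvme0 / nvme0n1 attach
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1                           # nvme0n2 appears
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421     # path 4421 added
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420  # path 4420 dropped
rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme                                    # everything detaches

# negative cases: re-running start_discovery with a name, or toward an 8009 target, that is
# already in use returns -17 "File exists"; nvme_second against the unused port 8010 with
# -T 3000 ends in -110 "Connection timed out"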
00:17:34.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:34.932 05:16:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:34.932 05:16:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:34.932 05:16:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:35.192 05:16:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:35.192 05:16:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:35.192 05:16:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:35.192 05:16:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:35.192 05:16:24 -- scripts/common.sh@335 -- # IFS=.-: 00:17:35.192 05:16:24 -- scripts/common.sh@335 -- # read -ra ver1 00:17:35.192 05:16:24 -- scripts/common.sh@336 -- # IFS=.-: 00:17:35.192 05:16:24 -- scripts/common.sh@336 -- # read -ra ver2 00:17:35.192 05:16:24 -- scripts/common.sh@337 -- # local 'op=<' 00:17:35.192 05:16:24 -- scripts/common.sh@339 -- # ver1_l=2 00:17:35.192 05:16:24 -- scripts/common.sh@340 -- # ver2_l=1 00:17:35.192 05:16:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:35.192 05:16:24 -- scripts/common.sh@343 -- # case "$op" in 00:17:35.192 05:16:24 -- scripts/common.sh@344 -- # : 1 00:17:35.192 05:16:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:35.192 05:16:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:35.192 05:16:24 -- scripts/common.sh@364 -- # decimal 1 00:17:35.192 05:16:24 -- scripts/common.sh@352 -- # local d=1 00:17:35.192 05:16:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:35.192 05:16:24 -- scripts/common.sh@354 -- # echo 1 00:17:35.192 05:16:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:35.192 05:16:24 -- scripts/common.sh@365 -- # decimal 2 00:17:35.192 05:16:24 -- scripts/common.sh@352 -- # local d=2 00:17:35.192 05:16:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:35.192 05:16:24 -- scripts/common.sh@354 -- # echo 2 00:17:35.192 05:16:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:35.192 05:16:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:35.192 05:16:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:35.192 05:16:24 -- scripts/common.sh@367 -- # return 0 00:17:35.192 05:16:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:35.192 05:16:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:35.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.192 --rc genhtml_branch_coverage=1 00:17:35.192 --rc genhtml_function_coverage=1 00:17:35.192 --rc genhtml_legend=1 00:17:35.192 --rc geninfo_all_blocks=1 00:17:35.192 --rc geninfo_unexecuted_blocks=1 00:17:35.192 00:17:35.192 ' 00:17:35.192 05:16:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:35.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.192 --rc genhtml_branch_coverage=1 00:17:35.192 --rc genhtml_function_coverage=1 00:17:35.192 --rc genhtml_legend=1 00:17:35.192 --rc geninfo_all_blocks=1 00:17:35.192 --rc geninfo_unexecuted_blocks=1 00:17:35.192 00:17:35.192 ' 00:17:35.192 05:16:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:35.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.192 --rc genhtml_branch_coverage=1 00:17:35.192 --rc genhtml_function_coverage=1 00:17:35.192 --rc genhtml_legend=1 00:17:35.192 --rc geninfo_all_blocks=1 00:17:35.192 --rc geninfo_unexecuted_blocks=1 00:17:35.192 00:17:35.192 ' 00:17:35.192 
05:16:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:35.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.192 --rc genhtml_branch_coverage=1 00:17:35.192 --rc genhtml_function_coverage=1 00:17:35.192 --rc genhtml_legend=1 00:17:35.192 --rc geninfo_all_blocks=1 00:17:35.192 --rc geninfo_unexecuted_blocks=1 00:17:35.192 00:17:35.192 ' 00:17:35.192 05:16:24 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:35.192 05:16:24 -- nvmf/common.sh@7 -- # uname -s 00:17:35.192 05:16:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.192 05:16:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.192 05:16:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.192 05:16:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.192 05:16:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.192 05:16:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.192 05:16:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.192 05:16:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.192 05:16:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.192 05:16:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.192 05:16:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:17:35.192 05:16:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:17:35.192 05:16:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.192 05:16:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.192 05:16:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:35.192 05:16:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:35.192 05:16:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.192 05:16:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.192 05:16:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.192 05:16:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.192 05:16:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.192 05:16:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.192 05:16:24 -- paths/export.sh@5 -- # export PATH 00:17:35.192 05:16:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.193 05:16:24 -- nvmf/common.sh@46 -- # : 0 00:17:35.193 05:16:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:35.193 05:16:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:35.193 05:16:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:35.193 05:16:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.193 05:16:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.193 05:16:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:35.193 05:16:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:35.193 05:16:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:35.193 05:16:24 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:17:35.193 05:16:24 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:17:35.193 05:16:24 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:17:35.193 05:16:24 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:35.193 05:16:24 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:17:35.193 05:16:24 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:17:35.193 05:16:24 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:17:35.193 05:16:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:35.193 05:16:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.193 05:16:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:35.193 05:16:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:35.193 05:16:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:35.193 05:16:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.193 05:16:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.193 05:16:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.193 05:16:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:35.193 05:16:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:35.193 05:16:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:35.193 05:16:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:35.193 05:16:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:35.193 05:16:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:35.193 05:16:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.193 05:16:24 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.193 05:16:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:35.193 05:16:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:35.193 05:16:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:35.193 05:16:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:35.193 05:16:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:35.193 05:16:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.193 05:16:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:35.193 05:16:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:35.193 05:16:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:35.193 05:16:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:35.193 05:16:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:35.193 05:16:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:35.193 Cannot find device "nvmf_tgt_br" 00:17:35.193 05:16:24 -- nvmf/common.sh@154 -- # true 00:17:35.193 05:16:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:35.193 Cannot find device "nvmf_tgt_br2" 00:17:35.193 05:16:24 -- nvmf/common.sh@155 -- # true 00:17:35.193 05:16:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:35.193 05:16:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:35.193 Cannot find device "nvmf_tgt_br" 00:17:35.193 05:16:24 -- nvmf/common.sh@157 -- # true 00:17:35.193 05:16:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:35.193 Cannot find device "nvmf_tgt_br2" 00:17:35.193 05:16:24 -- nvmf/common.sh@158 -- # true 00:17:35.193 05:16:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:35.193 05:16:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:35.193 05:16:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:35.193 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.193 05:16:24 -- nvmf/common.sh@161 -- # true 00:17:35.193 05:16:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:35.193 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.450 05:16:24 -- nvmf/common.sh@162 -- # true 00:17:35.450 05:16:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:35.450 05:16:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:35.450 05:16:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:35.450 05:16:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:35.450 05:16:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:35.450 05:16:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:35.450 05:16:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:35.450 05:16:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:35.450 05:16:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:35.450 05:16:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:35.450 05:16:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:35.450 05:16:25 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:35.450 05:16:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:35.450 05:16:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:35.450 05:16:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:35.450 05:16:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:35.450 05:16:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:35.450 05:16:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:35.450 05:16:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:35.450 05:16:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:35.450 05:16:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:35.450 05:16:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:35.450 05:16:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:35.450 05:16:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:35.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:17:35.450 00:17:35.450 --- 10.0.0.2 ping statistics --- 00:17:35.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.450 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:17:35.450 05:16:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:35.450 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:35.450 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:17:35.450 00:17:35.450 --- 10.0.0.3 ping statistics --- 00:17:35.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.450 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:35.450 05:16:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:35.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:35.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:35.450 00:17:35.450 --- 10.0.0.1 ping statistics --- 00:17:35.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.450 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:35.450 05:16:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.450 05:16:25 -- nvmf/common.sh@421 -- # return 0 00:17:35.450 05:16:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:35.450 05:16:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.450 05:16:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:35.450 05:16:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:35.450 05:16:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.450 05:16:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:35.450 05:16:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:35.450 05:16:25 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:17:35.450 05:16:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:35.450 05:16:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:35.450 05:16:25 -- common/autotest_common.sh@10 -- # set +x 00:17:35.450 05:16:25 -- nvmf/common.sh@469 -- # nvmfpid=83339 00:17:35.450 05:16:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:35.450 05:16:25 -- nvmf/common.sh@470 -- # waitforlisten 83339 00:17:35.450 05:16:25 -- common/autotest_common.sh@829 -- # '[' -z 83339 ']' 00:17:35.450 05:16:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.450 05:16:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.450 05:16:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.450 05:16:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.450 05:16:25 -- common/autotest_common.sh@10 -- # set +x 00:17:35.707 [2024-12-08 05:16:25.274359] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:35.707 [2024-12-08 05:16:25.274648] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.707 [2024-12-08 05:16:25.419871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.707 [2024-12-08 05:16:25.466828] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:35.707 [2024-12-08 05:16:25.467254] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.707 [2024-12-08 05:16:25.467284] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.707 [2024-12-08 05:16:25.467296] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
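The nvmf_veth_init sequence traced above builds the test's virtual topology before the target comes up. Condensed from the trace (same interface names and addresses; the matching "ip link set ... up" commands and the second target interface nvmf_tgt_if2 at 10.0.0.3 follow the same pattern and are omitted here), the setup is roughly:

# recap of commands already traced above, not an additional step in the run
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2

After this, the initiator side reaches 10.0.0.2 and 10.0.0.3 through the bridge and the namespace can reach 10.0.0.1, which is exactly what the three pings above verify.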
00:17:35.707 [2024-12-08 05:16:25.467329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.659 05:16:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:36.659 05:16:26 -- common/autotest_common.sh@862 -- # return 0 00:17:36.659 05:16:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:36.659 05:16:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:36.659 05:16:26 -- common/autotest_common.sh@10 -- # set +x 00:17:36.659 05:16:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.659 05:16:26 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:17:36.659 05:16:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.659 05:16:26 -- common/autotest_common.sh@10 -- # set +x 00:17:36.659 [2024-12-08 05:16:26.385641] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.659 [2024-12-08 05:16:26.393795] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:36.659 null0 00:17:36.659 [2024-12-08 05:16:26.425760] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.917 05:16:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.917 05:16:26 -- host/discovery_remove_ifc.sh@59 -- # hostpid=83377 00:17:36.917 05:16:26 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 83377 /tmp/host.sock 00:17:36.917 05:16:26 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:17:36.917 05:16:26 -- common/autotest_common.sh@829 -- # '[' -z 83377 ']' 00:17:36.917 05:16:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:17:36.917 05:16:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.917 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:36.917 05:16:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:36.917 05:16:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.917 05:16:26 -- common/autotest_common.sh@10 -- # set +x 00:17:36.917 [2024-12-08 05:16:26.504735] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:36.917 [2024-12-08 05:16:26.505033] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83377 ] 00:17:36.917 [2024-12-08 05:16:26.688464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.174 [2024-12-08 05:16:26.744074] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:37.174 [2024-12-08 05:16:26.744260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.110 05:16:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.110 05:16:27 -- common/autotest_common.sh@862 -- # return 0 00:17:38.110 05:16:27 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:38.110 05:16:27 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:17:38.110 05:16:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.110 05:16:27 -- common/autotest_common.sh@10 -- # set +x 00:17:38.110 05:16:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.110 05:16:27 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:17:38.110 05:16:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.110 05:16:27 -- common/autotest_common.sh@10 -- # set +x 00:17:38.110 05:16:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.110 05:16:27 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:17:38.110 05:16:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.110 05:16:27 -- common/autotest_common.sh@10 -- # set +x 00:17:39.047 [2024-12-08 05:16:28.619184] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:39.047 [2024-12-08 05:16:28.619255] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:39.047 [2024-12-08 05:16:28.619288] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:39.047 [2024-12-08 05:16:28.625272] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:39.047 [2024-12-08 05:16:28.681358] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:39.047 [2024-12-08 05:16:28.681609] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:39.047 [2024-12-08 05:16:28.681701] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:39.047 [2024-12-08 05:16:28.681857] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:39.047 [2024-12-08 05:16:28.682012] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:39.047 05:16:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.047 05:16:28 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:17:39.047 05:16:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:39.047 05:16:28 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:39.047 05:16:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:39.047 05:16:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.047 [2024-12-08 05:16:28.687467] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x225b2c0 was disconnected and freed. delete nvme_qpair. 00:17:39.047 05:16:28 -- common/autotest_common.sh@10 -- # set +x 00:17:39.047 05:16:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:39.047 05:16:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:39.047 05:16:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.047 05:16:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:39.047 05:16:28 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:17:39.047 05:16:28 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:39.047 05:16:28 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:39.047 05:16:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:39.047 05:16:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:39.047 05:16:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:39.048 05:16:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:39.048 05:16:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:39.048 05:16:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.048 05:16:28 -- common/autotest_common.sh@10 -- # set +x 00:17:39.048 05:16:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.048 05:16:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:39.048 05:16:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:40.421 05:16:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:40.421 05:16:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:40.421 05:16:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.421 05:16:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:40.421 05:16:29 -- common/autotest_common.sh@10 -- # set +x 00:17:40.421 05:16:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:40.421 05:16:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:40.421 05:16:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.421 05:16:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:40.421 05:16:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:41.355 05:16:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:41.355 05:16:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:41.355 05:16:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:41.355 05:16:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.355 05:16:30 -- common/autotest_common.sh@10 -- # set +x 00:17:41.355 05:16:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:41.355 05:16:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:41.355 05:16:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.355 05:16:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:41.355 05:16:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:42.288 05:16:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:42.288 05:16:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:17:42.288 05:16:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.288 05:16:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:42.288 05:16:31 -- common/autotest_common.sh@10 -- # set +x 00:17:42.288 05:16:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:42.288 05:16:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:42.288 05:16:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.288 05:16:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:42.288 05:16:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:43.222 05:16:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:43.222 05:16:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:43.222 05:16:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:43.222 05:16:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.222 05:16:33 -- common/autotest_common.sh@10 -- # set +x 00:17:43.222 05:16:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:43.222 05:16:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:43.480 05:16:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.480 05:16:33 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:43.480 05:16:33 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:44.415 05:16:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:44.415 05:16:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:44.415 05:16:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:44.415 05:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.415 05:16:34 -- common/autotest_common.sh@10 -- # set +x 00:17:44.415 05:16:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:44.415 05:16:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:44.415 05:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.415 [2024-12-08 05:16:34.109501] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:44.415 [2024-12-08 05:16:34.109568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.415 [2024-12-08 05:16:34.109586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.415 [2024-12-08 05:16:34.109599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.415 [2024-12-08 05:16:34.109609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.415 [2024-12-08 05:16:34.109619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.415 [2024-12-08 05:16:34.109628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.415 [2024-12-08 05:16:34.109638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.415 [2024-12-08 05:16:34.109648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.415 [2024-12-08 
05:16:34.109658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.415 [2024-12-08 05:16:34.109667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.415 [2024-12-08 05:16:34.109694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221f6c0 is same with the state(5) to be set 00:17:44.415 [2024-12-08 05:16:34.119496] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x221f6c0 (9): Bad file descriptor 00:17:44.415 05:16:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:44.415 05:16:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:44.415 [2024-12-08 05:16:34.129538] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:45.350 05:16:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:45.350 05:16:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:45.350 05:16:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.350 05:16:35 -- common/autotest_common.sh@10 -- # set +x 00:17:45.350 05:16:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:45.350 05:16:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:45.350 05:16:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:45.607 [2024-12-08 05:16:35.188762] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:46.540 [2024-12-08 05:16:36.212725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:17:47.541 [2024-12-08 05:16:37.236719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:17:47.541 [2024-12-08 05:16:37.236814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221f6c0 with addr=10.0.0.2, port=4420 00:17:47.541 [2024-12-08 05:16:37.236841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221f6c0 is same with the state(5) to be set 00:17:47.541 [2024-12-08 05:16:37.236888] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:47.541 [2024-12-08 05:16:37.236905] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:47.541 [2024-12-08 05:16:37.236917] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:47.541 [2024-12-08 05:16:37.236931] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:17:47.541 [2024-12-08 05:16:37.238180] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x221f6c0 (9): Bad file descriptor 00:17:47.541 [2024-12-08 05:16:37.238243] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:47.541 [2024-12-08 05:16:37.238287] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:17:47.542 [2024-12-08 05:16:37.238337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.542 [2024-12-08 05:16:37.238360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.542 [2024-12-08 05:16:37.238382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.542 [2024-12-08 05:16:37.238396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.542 [2024-12-08 05:16:37.238416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.542 [2024-12-08 05:16:37.238434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.542 [2024-12-08 05:16:37.238453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.542 [2024-12-08 05:16:37.238467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.542 [2024-12-08 05:16:37.238486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.542 [2024-12-08 05:16:37.238501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.542 [2024-12-08 05:16:37.238517] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
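While the controller sits in this failed state, the host-side script is in the wait_for_bdev '' loop traced earlier: it polls bdev_get_bdevs on /tmp/host.sock once a second until the reported bdev list matches the expected value ('' after the target interface is pulled, nvme1n1 once it is restored further down). A rough sketch of that loop, pieced together from the rpc_cmd/jq/sort/xargs fragments in the trace rather than copied from discovery_remove_ifc.sh (rpc_cmd is the autotest wrapper around scripts/rpc.py):

expected=''   # later in the test the same loop waits for nvme1n1 instead
while [[ "$(rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != "$expected" ]]; do
    sleep 1
done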
00:17:47.542 [2024-12-08 05:16:37.238543] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x221fad0 (9): Bad file descriptor 00:17:47.542 [2024-12-08 05:16:37.239285] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:47.542 [2024-12-08 05:16:37.239317] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:17:47.542 05:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.542 05:16:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:47.542 05:16:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:48.912 05:16:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:48.912 05:16:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:48.912 05:16:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:48.912 05:16:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.912 05:16:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:48.912 05:16:38 -- common/autotest_common.sh@10 -- # set +x 00:17:48.912 05:16:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:48.912 05:16:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.912 05:16:38 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:48.912 05:16:38 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:48.912 05:16:38 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:48.912 05:16:38 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:48.912 05:16:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:48.912 05:16:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:48.912 05:16:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:48.912 05:16:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.912 05:16:38 -- common/autotest_common.sh@10 -- # set +x 00:17:48.912 05:16:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:48.912 05:16:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:48.912 05:16:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.912 05:16:38 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:48.912 05:16:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:49.476 [2024-12-08 05:16:39.244787] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:49.476 [2024-12-08 05:16:39.245004] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:49.476 [2024-12-08 05:16:39.245069] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:49.476 [2024-12-08 05:16:39.250826] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:17:49.734 [2024-12-08 05:16:39.306030] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:49.734 [2024-12-08 05:16:39.306094] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:49.734 [2024-12-08 05:16:39.306117] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:49.734 [2024-12-08 05:16:39.306133] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:17:49.734 [2024-12-08 05:16:39.306143] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:49.734 [2024-12-08 05:16:39.313428] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x222c930 was disconnected and freed. delete nvme_qpair. 00:17:49.734 05:16:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:49.734 05:16:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:49.734 05:16:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.734 05:16:39 -- common/autotest_common.sh@10 -- # set +x 00:17:49.734 05:16:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:49.734 05:16:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:49.734 05:16:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:49.734 05:16:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.734 05:16:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:49.734 05:16:39 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:49.734 05:16:39 -- host/discovery_remove_ifc.sh@90 -- # killprocess 83377 00:17:49.734 05:16:39 -- common/autotest_common.sh@936 -- # '[' -z 83377 ']' 00:17:49.734 05:16:39 -- common/autotest_common.sh@940 -- # kill -0 83377 00:17:49.734 05:16:39 -- common/autotest_common.sh@941 -- # uname 00:17:49.734 05:16:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:49.734 05:16:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83377 00:17:49.734 killing process with pid 83377 00:17:49.734 05:16:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:49.734 05:16:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:49.734 05:16:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83377' 00:17:49.734 05:16:39 -- common/autotest_common.sh@955 -- # kill 83377 00:17:49.734 05:16:39 -- common/autotest_common.sh@960 -- # wait 83377 00:17:49.991 05:16:39 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:49.991 05:16:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:49.991 05:16:39 -- nvmf/common.sh@116 -- # sync 00:17:49.991 05:16:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:49.991 05:16:39 -- nvmf/common.sh@119 -- # set +e 00:17:49.991 05:16:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:49.991 05:16:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:49.991 rmmod nvme_tcp 00:17:49.991 rmmod nvme_fabrics 00:17:49.991 rmmod nvme_keyring 00:17:49.991 05:16:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:49.991 05:16:39 -- nvmf/common.sh@123 -- # set -e 00:17:49.991 05:16:39 -- nvmf/common.sh@124 -- # return 0 00:17:49.991 05:16:39 -- nvmf/common.sh@477 -- # '[' -n 83339 ']' 00:17:49.991 05:16:39 -- nvmf/common.sh@478 -- # killprocess 83339 00:17:49.991 05:16:39 -- common/autotest_common.sh@936 -- # '[' -z 83339 ']' 00:17:49.991 05:16:39 -- common/autotest_common.sh@940 -- # kill -0 83339 00:17:49.991 05:16:39 -- common/autotest_common.sh@941 -- # uname 00:17:49.991 05:16:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:49.991 05:16:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83339 00:17:50.248 killing process with pid 83339 00:17:50.248 05:16:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:50.248 05:16:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
00:17:50.248 05:16:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83339' 00:17:50.248 05:16:39 -- common/autotest_common.sh@955 -- # kill 83339 00:17:50.248 05:16:39 -- common/autotest_common.sh@960 -- # wait 83339 00:17:50.248 05:16:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:50.248 05:16:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:50.248 05:16:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:50.248 05:16:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:50.248 05:16:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:50.248 05:16:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.248 05:16:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.248 05:16:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.248 05:16:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:50.248 ************************************ 00:17:50.248 END TEST nvmf_discovery_remove_ifc 00:17:50.248 ************************************ 00:17:50.248 00:17:50.248 real 0m15.325s 00:17:50.248 user 0m24.589s 00:17:50.248 sys 0m2.513s 00:17:50.248 05:16:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:50.248 05:16:39 -- common/autotest_common.sh@10 -- # set +x 00:17:50.248 05:16:40 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:17:50.249 05:16:40 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:50.249 05:16:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:50.249 05:16:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:50.249 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:17:50.249 ************************************ 00:17:50.249 START TEST nvmf_digest 00:17:50.249 ************************************ 00:17:50.249 05:16:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:50.507 * Looking for test storage... 00:17:50.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:50.507 05:16:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:50.507 05:16:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:50.507 05:16:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:50.507 05:16:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:50.507 05:16:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:50.507 05:16:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:50.507 05:16:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:50.507 05:16:40 -- scripts/common.sh@335 -- # IFS=.-: 00:17:50.507 05:16:40 -- scripts/common.sh@335 -- # read -ra ver1 00:17:50.507 05:16:40 -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.507 05:16:40 -- scripts/common.sh@336 -- # read -ra ver2 00:17:50.507 05:16:40 -- scripts/common.sh@337 -- # local 'op=<' 00:17:50.507 05:16:40 -- scripts/common.sh@339 -- # ver1_l=2 00:17:50.507 05:16:40 -- scripts/common.sh@340 -- # ver2_l=1 00:17:50.507 05:16:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:50.507 05:16:40 -- scripts/common.sh@343 -- # case "$op" in 00:17:50.507 05:16:40 -- scripts/common.sh@344 -- # : 1 00:17:50.507 05:16:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:50.507 05:16:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:50.507 05:16:40 -- scripts/common.sh@364 -- # decimal 1 00:17:50.507 05:16:40 -- scripts/common.sh@352 -- # local d=1 00:17:50.507 05:16:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.507 05:16:40 -- scripts/common.sh@354 -- # echo 1 00:17:50.507 05:16:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:50.507 05:16:40 -- scripts/common.sh@365 -- # decimal 2 00:17:50.507 05:16:40 -- scripts/common.sh@352 -- # local d=2 00:17:50.507 05:16:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.507 05:16:40 -- scripts/common.sh@354 -- # echo 2 00:17:50.507 05:16:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:50.507 05:16:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:50.507 05:16:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:50.507 05:16:40 -- scripts/common.sh@367 -- # return 0 00:17:50.507 05:16:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.507 05:16:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:50.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.507 --rc genhtml_branch_coverage=1 00:17:50.507 --rc genhtml_function_coverage=1 00:17:50.507 --rc genhtml_legend=1 00:17:50.507 --rc geninfo_all_blocks=1 00:17:50.507 --rc geninfo_unexecuted_blocks=1 00:17:50.507 00:17:50.507 ' 00:17:50.507 05:16:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:50.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.507 --rc genhtml_branch_coverage=1 00:17:50.507 --rc genhtml_function_coverage=1 00:17:50.507 --rc genhtml_legend=1 00:17:50.507 --rc geninfo_all_blocks=1 00:17:50.507 --rc geninfo_unexecuted_blocks=1 00:17:50.507 00:17:50.507 ' 00:17:50.507 05:16:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:50.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.508 --rc genhtml_branch_coverage=1 00:17:50.508 --rc genhtml_function_coverage=1 00:17:50.508 --rc genhtml_legend=1 00:17:50.508 --rc geninfo_all_blocks=1 00:17:50.508 --rc geninfo_unexecuted_blocks=1 00:17:50.508 00:17:50.508 ' 00:17:50.508 05:16:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:50.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.508 --rc genhtml_branch_coverage=1 00:17:50.508 --rc genhtml_function_coverage=1 00:17:50.508 --rc genhtml_legend=1 00:17:50.508 --rc geninfo_all_blocks=1 00:17:50.508 --rc geninfo_unexecuted_blocks=1 00:17:50.508 00:17:50.508 ' 00:17:50.508 05:16:40 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:50.508 05:16:40 -- nvmf/common.sh@7 -- # uname -s 00:17:50.508 05:16:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.508 05:16:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.508 05:16:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.508 05:16:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.508 05:16:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.508 05:16:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.508 05:16:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.508 05:16:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.508 05:16:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.508 05:16:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.508 05:16:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:17:50.508 
05:16:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:17:50.508 05:16:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.508 05:16:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.508 05:16:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:50.508 05:16:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:50.508 05:16:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.508 05:16:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.508 05:16:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.508 05:16:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.508 05:16:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.508 05:16:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.508 05:16:40 -- paths/export.sh@5 -- # export PATH 00:17:50.508 05:16:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.508 05:16:40 -- nvmf/common.sh@46 -- # : 0 00:17:50.508 05:16:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:50.508 05:16:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:50.508 05:16:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:50.508 05:16:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.508 05:16:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.508 05:16:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:17:50.508 05:16:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:50.508 05:16:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:50.508 05:16:40 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:50.508 05:16:40 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:50.508 05:16:40 -- host/digest.sh@16 -- # runtime=2 00:17:50.508 05:16:40 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:17:50.508 05:16:40 -- host/digest.sh@132 -- # nvmftestinit 00:17:50.508 05:16:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:50.508 05:16:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.508 05:16:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:50.508 05:16:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:50.508 05:16:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:50.508 05:16:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.508 05:16:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.508 05:16:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.508 05:16:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:50.508 05:16:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:50.508 05:16:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:50.508 05:16:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:50.508 05:16:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:50.508 05:16:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:50.508 05:16:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.508 05:16:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.508 05:16:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:50.508 05:16:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:50.508 05:16:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:50.508 05:16:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:50.508 05:16:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:50.508 05:16:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.508 05:16:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:50.508 05:16:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:50.508 05:16:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:50.508 05:16:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:50.508 05:16:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:50.508 05:16:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:50.508 Cannot find device "nvmf_tgt_br" 00:17:50.508 05:16:40 -- nvmf/common.sh@154 -- # true 00:17:50.508 05:16:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:50.508 Cannot find device "nvmf_tgt_br2" 00:17:50.508 05:16:40 -- nvmf/common.sh@155 -- # true 00:17:50.508 05:16:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:50.508 05:16:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:50.508 Cannot find device "nvmf_tgt_br" 00:17:50.508 05:16:40 -- nvmf/common.sh@157 -- # true 00:17:50.508 05:16:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:50.508 Cannot find device "nvmf_tgt_br2" 00:17:50.508 05:16:40 -- nvmf/common.sh@158 -- # true 00:17:50.508 05:16:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:50.766 05:16:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:50.766 
05:16:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:50.766 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:50.766 05:16:40 -- nvmf/common.sh@161 -- # true 00:17:50.766 05:16:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:50.766 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:50.766 05:16:40 -- nvmf/common.sh@162 -- # true 00:17:50.766 05:16:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:50.766 05:16:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:50.766 05:16:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:50.766 05:16:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:50.766 05:16:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:50.766 05:16:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:50.766 05:16:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:50.766 05:16:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:50.766 05:16:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:50.766 05:16:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:50.766 05:16:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:50.766 05:16:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:50.766 05:16:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:50.766 05:16:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:50.766 05:16:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:50.766 05:16:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:50.766 05:16:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:50.766 05:16:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:50.766 05:16:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:50.766 05:16:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:50.766 05:16:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:50.766 05:16:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:50.766 05:16:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:50.766 05:16:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:50.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:50.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:17:50.766 00:17:50.766 --- 10.0.0.2 ping statistics --- 00:17:50.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.766 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:17:50.766 05:16:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:50.766 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:50.766 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:17:50.766 00:17:50.766 --- 10.0.0.3 ping statistics --- 00:17:50.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.766 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:50.766 05:16:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:50.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:50.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:50.766 00:17:50.766 --- 10.0.0.1 ping statistics --- 00:17:50.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.766 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:50.766 05:16:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.766 05:16:40 -- nvmf/common.sh@421 -- # return 0 00:17:50.766 05:16:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:50.766 05:16:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.766 05:16:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:50.766 05:16:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:50.766 05:16:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.766 05:16:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:50.766 05:16:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:51.023 05:16:40 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:51.023 05:16:40 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:17:51.023 05:16:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:51.023 05:16:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:51.023 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:17:51.023 ************************************ 00:17:51.023 START TEST nvmf_digest_clean 00:17:51.023 ************************************ 00:17:51.023 05:16:40 -- common/autotest_common.sh@1114 -- # run_digest 00:17:51.023 05:16:40 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:17:51.023 05:16:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:51.023 05:16:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:51.023 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:17:51.023 05:16:40 -- nvmf/common.sh@469 -- # nvmfpid=83791 00:17:51.023 05:16:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:51.023 05:16:40 -- nvmf/common.sh@470 -- # waitforlisten 83791 00:17:51.023 05:16:40 -- common/autotest_common.sh@829 -- # '[' -z 83791 ']' 00:17:51.023 05:16:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.023 05:16:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.023 05:16:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.023 05:16:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.023 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:17:51.023 [2024-12-08 05:16:40.626057] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:51.023 [2024-12-08 05:16:40.626138] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.023 [2024-12-08 05:16:40.760596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.023 [2024-12-08 05:16:40.794004] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:51.023 [2024-12-08 05:16:40.794149] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.023 [2024-12-08 05:16:40.794162] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.024 [2024-12-08 05:16:40.794171] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.024 [2024-12-08 05:16:40.794197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.281 05:16:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.281 05:16:40 -- common/autotest_common.sh@862 -- # return 0 00:17:51.281 05:16:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:51.281 05:16:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:51.281 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:17:51.281 05:16:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.281 05:16:40 -- host/digest.sh@120 -- # common_target_config 00:17:51.281 05:16:40 -- host/digest.sh@43 -- # rpc_cmd 00:17:51.281 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.281 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:17:51.281 null0 00:17:51.281 [2024-12-08 05:16:40.939337] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.281 [2024-12-08 05:16:40.963479] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.281 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.281 05:16:40 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:17:51.281 05:16:40 -- host/digest.sh@77 -- # local rw bs qd 00:17:51.281 05:16:40 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:51.281 05:16:40 -- host/digest.sh@80 -- # rw=randread 00:17:51.281 05:16:40 -- host/digest.sh@80 -- # bs=4096 00:17:51.281 05:16:40 -- host/digest.sh@80 -- # qd=128 00:17:51.281 05:16:40 -- host/digest.sh@82 -- # bperfpid=83821 00:17:51.281 05:16:40 -- host/digest.sh@83 -- # waitforlisten 83821 /var/tmp/bperf.sock 00:17:51.281 05:16:40 -- common/autotest_common.sh@829 -- # '[' -z 83821 ']' 00:17:51.281 05:16:40 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:51.281 05:16:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:51.281 05:16:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:51.281 05:16:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
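At this point the target application (pid 83791) inside nvmf_tgt_ns_spdk has been configured by common_target_config: the trace only shows the collapsed rpc_cmd call, but its visible effects are the null bdev "null0", the TCP transport init notice, and the listener on 10.0.0.2 port 4420 for nqn.2016-06.io.spdk:cnode1. The exact RPCs are not captured verbatim in this log; a rough hand-written equivalent, with the bdev size and serial number as placeholders, would be:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # target listens on the default /var/tmp/spdk.sock
    $rpc nvmf_create_transport -t tcp                  # transport options in this run were '-t tcp -o'
    $rpc bdev_null_create null0 100 512                # 100 MiB null bdev, 512-byte blocks (sizes assumed)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4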
00:17:51.281 05:16:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.281 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:17:51.281 [2024-12-08 05:16:41.009519] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:51.281 [2024-12-08 05:16:41.009612] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83821 ] 00:17:51.539 [2024-12-08 05:16:41.145894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.539 [2024-12-08 05:16:41.184978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.539 05:16:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.539 05:16:41 -- common/autotest_common.sh@862 -- # return 0 00:17:51.539 05:16:41 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:51.539 05:16:41 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:51.539 05:16:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:52.115 05:16:41 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:52.115 05:16:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:52.375 nvme0n1 00:17:52.376 05:16:41 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:52.376 05:16:41 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:52.376 Running I/O for 2 seconds... 
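The initiator side is a second SPDK app: bdevperf was launched with -z --wait-for-rpc, so it sits idle until it is configured over its own RPC socket, /var/tmp/bperf.sock. The three steps traced above are, essentially verbatim from the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock framework_start_init
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

--ddgst enables the NVMe/TCP data digest on the new nvme0 controller, so every data PDU carries a CRC32C that the accel framework has to compute; that is the work counted by the crc32c statistics checked after each run.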
00:17:54.908 00:17:54.908 Latency(us) 00:17:54.908 [2024-12-08T05:16:44.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.908 [2024-12-08T05:16:44.694Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:54.908 nvme0n1 : 2.01 14609.42 57.07 0.00 0.00 8756.29 8102.63 19899.11 00:17:54.908 [2024-12-08T05:16:44.694Z] =================================================================================================================== 00:17:54.908 [2024-12-08T05:16:44.694Z] Total : 14609.42 57.07 0.00 0.00 8756.29 8102.63 19899.11 00:17:54.908 0 00:17:54.908 05:16:44 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:54.908 05:16:44 -- host/digest.sh@92 -- # get_accel_stats 00:17:54.908 05:16:44 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:54.908 05:16:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:54.908 05:16:44 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:54.908 | select(.opcode=="crc32c") 00:17:54.908 | "\(.module_name) \(.executed)"' 00:17:54.908 05:16:44 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:54.908 05:16:44 -- host/digest.sh@93 -- # exp_module=software 00:17:54.908 05:16:44 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:54.908 05:16:44 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:54.908 05:16:44 -- host/digest.sh@97 -- # killprocess 83821 00:17:54.908 05:16:44 -- common/autotest_common.sh@936 -- # '[' -z 83821 ']' 00:17:54.908 05:16:44 -- common/autotest_common.sh@940 -- # kill -0 83821 00:17:54.908 05:16:44 -- common/autotest_common.sh@941 -- # uname 00:17:54.908 05:16:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:54.908 05:16:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83821 00:17:54.908 05:16:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:54.908 05:16:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:54.908 killing process with pid 83821 00:17:54.908 05:16:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83821' 00:17:54.908 Received shutdown signal, test time was about 2.000000 seconds 00:17:54.908 00:17:54.908 Latency(us) 00:17:54.908 [2024-12-08T05:16:44.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.908 [2024-12-08T05:16:44.694Z] =================================================================================================================== 00:17:54.908 [2024-12-08T05:16:44.694Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:54.908 05:16:44 -- common/autotest_common.sh@955 -- # kill 83821 00:17:54.908 05:16:44 -- common/autotest_common.sh@960 -- # wait 83821 00:17:54.908 05:16:44 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:17:54.908 05:16:44 -- host/digest.sh@77 -- # local rw bs qd 00:17:54.908 05:16:44 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:54.908 05:16:44 -- host/digest.sh@80 -- # rw=randread 00:17:54.908 05:16:44 -- host/digest.sh@80 -- # bs=131072 00:17:54.908 05:16:44 -- host/digest.sh@80 -- # qd=16 00:17:54.908 05:16:44 -- host/digest.sh@82 -- # bperfpid=83868 00:17:54.908 05:16:44 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:54.908 05:16:44 -- host/digest.sh@83 -- # waitforlisten 83868 /var/tmp/bperf.sock 00:17:54.908 05:16:44 -- 
common/autotest_common.sh@829 -- # '[' -z 83868 ']' 00:17:54.908 05:16:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:54.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:54.908 05:16:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:54.908 05:16:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:54.908 05:16:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:54.908 05:16:44 -- common/autotest_common.sh@10 -- # set +x 00:17:54.908 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:54.908 Zero copy mechanism will not be used. 00:17:54.908 [2024-12-08 05:16:44.647700] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:54.908 [2024-12-08 05:16:44.647818] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83868 ] 00:17:55.167 [2024-12-08 05:16:44.792966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.167 [2024-12-08 05:16:44.827485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.167 05:16:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:55.167 05:16:44 -- common/autotest_common.sh@862 -- # return 0 00:17:55.167 05:16:44 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:55.167 05:16:44 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:55.167 05:16:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:55.733 05:16:45 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:55.733 05:16:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:55.733 nvme0n1 00:17:55.733 05:16:45 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:55.733 05:16:45 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:55.991 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:55.991 Zero copy mechanism will not be used. 00:17:55.991 Running I/O for 2 seconds... 
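Two things change in this second run: the I/O size goes from 4096 to 131072 bytes at queue depth 16, and because 131072 is above the 65536-byte zero copy threshold reported by bdevperf, the socket layer falls back to copied buffers (the repeated "Zero copy mechanism will not be used" notices). The result tables can also be sanity-checked with simple arithmetic; for the 4 KiB randread run above, IOPS times block size reproduces the reported MiB/s:

    # 14609.42 IOPS * 4096 B per I/O, converted to MiB/s (values taken from the table above)
    echo 'scale=2; 14609.42 * 4096 / 1048576' | bc    # -> 57.06, in line with the 57.07 MiB/s column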
00:17:57.907 00:17:57.907 Latency(us) 00:17:57.907 [2024-12-08T05:16:47.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.907 [2024-12-08T05:16:47.693Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:57.907 nvme0n1 : 2.00 6968.17 871.02 0.00 0.00 2292.85 2055.45 7626.01 00:17:57.907 [2024-12-08T05:16:47.693Z] =================================================================================================================== 00:17:57.907 [2024-12-08T05:16:47.693Z] Total : 6968.17 871.02 0.00 0.00 2292.85 2055.45 7626.01 00:17:57.907 0 00:17:57.907 05:16:47 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:57.907 05:16:47 -- host/digest.sh@92 -- # get_accel_stats 00:17:57.907 05:16:47 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:57.907 05:16:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:57.907 05:16:47 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:57.907 | select(.opcode=="crc32c") 00:17:57.907 | "\(.module_name) \(.executed)"' 00:17:58.472 05:16:48 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:58.472 05:16:48 -- host/digest.sh@93 -- # exp_module=software 00:17:58.472 05:16:48 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:58.472 05:16:48 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:58.472 05:16:48 -- host/digest.sh@97 -- # killprocess 83868 00:17:58.472 05:16:48 -- common/autotest_common.sh@936 -- # '[' -z 83868 ']' 00:17:58.472 05:16:48 -- common/autotest_common.sh@940 -- # kill -0 83868 00:17:58.472 05:16:48 -- common/autotest_common.sh@941 -- # uname 00:17:58.472 05:16:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:58.472 05:16:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83868 00:17:58.472 05:16:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:58.472 05:16:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:58.472 killing process with pid 83868 00:17:58.473 05:16:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83868' 00:17:58.473 05:16:48 -- common/autotest_common.sh@955 -- # kill 83868 00:17:58.473 Received shutdown signal, test time was about 2.000000 seconds 00:17:58.473 00:17:58.473 Latency(us) 00:17:58.473 [2024-12-08T05:16:48.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.473 [2024-12-08T05:16:48.259Z] =================================================================================================================== 00:17:58.473 [2024-12-08T05:16:48.259Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:58.473 05:16:48 -- common/autotest_common.sh@960 -- # wait 83868 00:17:58.473 05:16:48 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:17:58.473 05:16:48 -- host/digest.sh@77 -- # local rw bs qd 00:17:58.473 05:16:48 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:58.473 05:16:48 -- host/digest.sh@80 -- # rw=randwrite 00:17:58.473 05:16:48 -- host/digest.sh@80 -- # bs=4096 00:17:58.473 05:16:48 -- host/digest.sh@80 -- # qd=128 00:17:58.473 05:16:48 -- host/digest.sh@82 -- # bperfpid=83921 00:17:58.473 05:16:48 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:58.473 05:16:48 -- host/digest.sh@83 -- # waitforlisten 83921 /var/tmp/bperf.sock 00:17:58.473 05:16:48 -- 
common/autotest_common.sh@829 -- # '[' -z 83921 ']' 00:17:58.473 05:16:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:58.473 05:16:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:58.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:58.473 05:16:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:58.473 05:16:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:58.473 05:16:48 -- common/autotest_common.sh@10 -- # set +x 00:17:58.731 [2024-12-08 05:16:48.302026] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:58.731 [2024-12-08 05:16:48.302146] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83921 ] 00:17:58.731 [2024-12-08 05:16:48.448015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.731 [2024-12-08 05:16:48.486177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.042 05:16:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:59.042 05:16:48 -- common/autotest_common.sh@862 -- # return 0 00:17:59.042 05:16:48 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:59.042 05:16:48 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:59.042 05:16:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:59.300 05:16:48 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:59.300 05:16:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:59.558 nvme0n1 00:17:59.558 05:16:49 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:59.558 05:16:49 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:59.815 Running I/O for 2 seconds... 
00:18:01.715 00:18:01.715 Latency(us) 00:18:01.715 [2024-12-08T05:16:51.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.715 [2024-12-08T05:16:51.501Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:01.715 nvme0n1 : 2.01 15507.71 60.58 0.00 0.00 8247.81 6345.08 17992.61 00:18:01.715 [2024-12-08T05:16:51.501Z] =================================================================================================================== 00:18:01.715 [2024-12-08T05:16:51.501Z] Total : 15507.71 60.58 0.00 0.00 8247.81 6345.08 17992.61 00:18:01.715 0 00:18:01.715 05:16:51 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:18:01.715 05:16:51 -- host/digest.sh@92 -- # get_accel_stats 00:18:01.715 05:16:51 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:01.715 | select(.opcode=="crc32c") 00:18:01.715 | "\(.module_name) \(.executed)"' 00:18:01.715 05:16:51 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:01.715 05:16:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:01.973 05:16:51 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:18:01.973 05:16:51 -- host/digest.sh@93 -- # exp_module=software 00:18:01.973 05:16:51 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:18:01.973 05:16:51 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:01.973 05:16:51 -- host/digest.sh@97 -- # killprocess 83921 00:18:01.973 05:16:51 -- common/autotest_common.sh@936 -- # '[' -z 83921 ']' 00:18:01.973 05:16:51 -- common/autotest_common.sh@940 -- # kill -0 83921 00:18:01.973 05:16:51 -- common/autotest_common.sh@941 -- # uname 00:18:02.230 05:16:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:02.230 05:16:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83921 00:18:02.230 killing process with pid 83921 00:18:02.230 Received shutdown signal, test time was about 2.000000 seconds 00:18:02.230 00:18:02.230 Latency(us) 00:18:02.230 [2024-12-08T05:16:52.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.230 [2024-12-08T05:16:52.016Z] =================================================================================================================== 00:18:02.230 [2024-12-08T05:16:52.016Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:02.230 05:16:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:02.230 05:16:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:02.230 05:16:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83921' 00:18:02.230 05:16:51 -- common/autotest_common.sh@955 -- # kill 83921 00:18:02.230 05:16:51 -- common/autotest_common.sh@960 -- # wait 83921 00:18:02.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
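Each run ends with the same verification and teardown sequence seen above: pull the accel statistics out of the bdevperf process, check that crc32c work was actually executed by the expected module (plain "software" here, since no hardware accel module is loaded), then kill the bdevperf pid and wait for it. Reduced to plain commands, the check amounts to:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # prints one line such as "software <count>"; digest.sh reads it into acc_module/acc_executed
    # and requires acc_executed to be greater than zero for the test to pass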
00:18:02.230 05:16:51 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:18:02.230 05:16:51 -- host/digest.sh@77 -- # local rw bs qd 00:18:02.230 05:16:51 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:02.230 05:16:51 -- host/digest.sh@80 -- # rw=randwrite 00:18:02.230 05:16:51 -- host/digest.sh@80 -- # bs=131072 00:18:02.230 05:16:51 -- host/digest.sh@80 -- # qd=16 00:18:02.230 05:16:51 -- host/digest.sh@82 -- # bperfpid=83976 00:18:02.230 05:16:51 -- host/digest.sh@83 -- # waitforlisten 83976 /var/tmp/bperf.sock 00:18:02.230 05:16:51 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:02.230 05:16:51 -- common/autotest_common.sh@829 -- # '[' -z 83976 ']' 00:18:02.230 05:16:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:02.230 05:16:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:02.230 05:16:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:02.230 05:16:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:02.230 05:16:51 -- common/autotest_common.sh@10 -- # set +x 00:18:02.230 [2024-12-08 05:16:51.993421] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:02.230 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:02.230 Zero copy mechanism will not be used. 00:18:02.230 [2024-12-08 05:16:51.993546] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83976 ] 00:18:02.488 [2024-12-08 05:16:52.131724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.488 [2024-12-08 05:16:52.167169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.746 05:16:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.746 05:16:52 -- common/autotest_common.sh@862 -- # return 0 00:18:02.746 05:16:52 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:18:02.746 05:16:52 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:18:02.746 05:16:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:03.003 05:16:52 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:03.003 05:16:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:03.260 nvme0n1 00:18:03.260 05:16:52 -- host/digest.sh@91 -- # bperf_py perform_tests 00:18:03.260 05:16:52 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:03.260 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:03.260 Zero copy mechanism will not be used. 00:18:03.260 Running I/O for 2 seconds... 
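With this fourth run the clean-digest matrix is complete: randread and randwrite are each exercised once with 4 KiB I/O at queue depth 128 and once with 128 KiB I/O at queue depth 16, and those three parameters are passed straight through to bdevperf as -w, -o and -q. As an illustration of the pattern (this is not the literal run_bperf code in host/digest.sh), the four invocations amount to:

    for combo in 'randread 4096 128' 'randread 131072 16' 'randwrite 4096 128' 'randwrite 131072 16'; do
        set -- $combo   # $1=workload $2=io_size $3=queue_depth
        /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            -m 2 -r /var/tmp/bperf.sock -w "$1" -o "$2" -q "$3" -t 2 -z --wait-for-rpc &
        # ...then configure it over /var/tmp/bperf.sock and run perform_tests as shown earlier...
    done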
00:18:05.790 00:18:05.790 Latency(us) 00:18:05.790 [2024-12-08T05:16:55.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.790 [2024-12-08T05:16:55.576Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:05.790 nvme0n1 : 2.00 6485.34 810.67 0.00 0.00 2461.11 1794.79 11141.12 00:18:05.790 [2024-12-08T05:16:55.576Z] =================================================================================================================== 00:18:05.790 [2024-12-08T05:16:55.576Z] Total : 6485.34 810.67 0.00 0.00 2461.11 1794.79 11141.12 00:18:05.790 0 00:18:05.790 05:16:55 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:18:05.790 05:16:55 -- host/digest.sh@92 -- # get_accel_stats 00:18:05.790 05:16:55 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:05.790 05:16:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:05.790 05:16:55 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:05.790 | select(.opcode=="crc32c") 00:18:05.790 | "\(.module_name) \(.executed)"' 00:18:05.790 05:16:55 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:18:05.790 05:16:55 -- host/digest.sh@93 -- # exp_module=software 00:18:05.790 05:16:55 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:18:05.790 05:16:55 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:05.790 05:16:55 -- host/digest.sh@97 -- # killprocess 83976 00:18:05.790 05:16:55 -- common/autotest_common.sh@936 -- # '[' -z 83976 ']' 00:18:05.790 05:16:55 -- common/autotest_common.sh@940 -- # kill -0 83976 00:18:05.790 05:16:55 -- common/autotest_common.sh@941 -- # uname 00:18:05.790 05:16:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:05.790 05:16:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83976 00:18:05.790 05:16:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:05.790 05:16:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:05.790 killing process with pid 83976 00:18:05.790 05:16:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83976' 00:18:05.790 Received shutdown signal, test time was about 2.000000 seconds 00:18:05.790 00:18:05.790 Latency(us) 00:18:05.790 [2024-12-08T05:16:55.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.790 [2024-12-08T05:16:55.576Z] =================================================================================================================== 00:18:05.790 [2024-12-08T05:16:55.577Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:05.791 05:16:55 -- common/autotest_common.sh@955 -- # kill 83976 00:18:05.791 05:16:55 -- common/autotest_common.sh@960 -- # wait 83976 00:18:05.791 05:16:55 -- host/digest.sh@126 -- # killprocess 83791 00:18:05.791 05:16:55 -- common/autotest_common.sh@936 -- # '[' -z 83791 ']' 00:18:05.791 05:16:55 -- common/autotest_common.sh@940 -- # kill -0 83791 00:18:05.791 05:16:55 -- common/autotest_common.sh@941 -- # uname 00:18:05.791 05:16:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:05.791 05:16:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83791 00:18:05.791 05:16:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:05.791 05:16:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:05.791 killing process with pid 83791 00:18:05.791 05:16:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83791' 
00:18:05.791 05:16:55 -- common/autotest_common.sh@955 -- # kill 83791 00:18:05.791 05:16:55 -- common/autotest_common.sh@960 -- # wait 83791 00:18:06.048 00:18:06.048 real 0m15.077s 00:18:06.048 user 0m29.673s 00:18:06.048 sys 0m4.368s 00:18:06.048 05:16:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:06.048 05:16:55 -- common/autotest_common.sh@10 -- # set +x 00:18:06.048 ************************************ 00:18:06.048 END TEST nvmf_digest_clean 00:18:06.048 ************************************ 00:18:06.048 05:16:55 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:18:06.048 05:16:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:06.048 05:16:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:06.048 05:16:55 -- common/autotest_common.sh@10 -- # set +x 00:18:06.048 ************************************ 00:18:06.048 START TEST nvmf_digest_error 00:18:06.048 ************************************ 00:18:06.048 05:16:55 -- common/autotest_common.sh@1114 -- # run_digest_error 00:18:06.048 05:16:55 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:18:06.048 05:16:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:06.048 05:16:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:06.048 05:16:55 -- common/autotest_common.sh@10 -- # set +x 00:18:06.048 05:16:55 -- nvmf/common.sh@469 -- # nvmfpid=84046 00:18:06.048 05:16:55 -- nvmf/common.sh@470 -- # waitforlisten 84046 00:18:06.048 05:16:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:06.048 05:16:55 -- common/autotest_common.sh@829 -- # '[' -z 84046 ']' 00:18:06.048 05:16:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.048 05:16:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:06.048 05:16:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.048 05:16:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:06.048 05:16:55 -- common/autotest_common.sh@10 -- # set +x 00:18:06.048 [2024-12-08 05:16:55.761500] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:06.048 [2024-12-08 05:16:55.761603] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.306 [2024-12-08 05:16:55.908880] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.306 [2024-12-08 05:16:55.949833] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:06.306 [2024-12-08 05:16:55.950066] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.306 [2024-12-08 05:16:55.950094] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.306 [2024-12-08 05:16:55.950109] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:06.306 [2024-12-08 05:16:55.950144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.306 05:16:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:06.306 05:16:56 -- common/autotest_common.sh@862 -- # return 0 00:18:06.306 05:16:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:06.306 05:16:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:06.306 05:16:56 -- common/autotest_common.sh@10 -- # set +x 00:18:06.306 05:16:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.306 05:16:56 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:06.306 05:16:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.306 05:16:56 -- common/autotest_common.sh@10 -- # set +x 00:18:06.564 [2024-12-08 05:16:56.094670] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:06.564 05:16:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.564 05:16:56 -- host/digest.sh@104 -- # common_target_config 00:18:06.564 05:16:56 -- host/digest.sh@43 -- # rpc_cmd 00:18:06.564 05:16:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.564 05:16:56 -- common/autotest_common.sh@10 -- # set +x 00:18:06.564 null0 00:18:06.564 [2024-12-08 05:16:56.168887] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.564 [2024-12-08 05:16:56.193086] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.564 05:16:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.564 05:16:56 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:18:06.564 05:16:56 -- host/digest.sh@54 -- # local rw bs qd 00:18:06.564 05:16:56 -- host/digest.sh@56 -- # rw=randread 00:18:06.564 05:16:56 -- host/digest.sh@56 -- # bs=4096 00:18:06.564 05:16:56 -- host/digest.sh@56 -- # qd=128 00:18:06.564 05:16:56 -- host/digest.sh@58 -- # bperfpid=84076 00:18:06.564 05:16:56 -- host/digest.sh@60 -- # waitforlisten 84076 /var/tmp/bperf.sock 00:18:06.564 05:16:56 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:06.564 05:16:56 -- common/autotest_common.sh@829 -- # '[' -z 84076 ']' 00:18:06.564 05:16:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:06.564 05:16:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:06.564 05:16:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:06.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:06.564 05:16:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:06.564 05:16:56 -- common/autotest_common.sh@10 -- # set +x 00:18:06.564 [2024-12-08 05:16:56.244904] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:06.564 [2024-12-08 05:16:56.244994] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84076 ] 00:18:06.822 [2024-12-08 05:16:56.382185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.822 [2024-12-08 05:16:56.418604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.822 05:16:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:06.822 05:16:56 -- common/autotest_common.sh@862 -- # return 0 00:18:06.822 05:16:56 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:06.822 05:16:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:07.079 05:16:56 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:07.080 05:16:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.080 05:16:56 -- common/autotest_common.sh@10 -- # set +x 00:18:07.080 05:16:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.080 05:16:56 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:07.080 05:16:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:07.645 nvme0n1 00:18:07.645 05:16:57 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:07.645 05:16:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.645 05:16:57 -- common/autotest_common.sh@10 -- # set +x 00:18:07.645 05:16:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.645 05:16:57 -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:07.645 05:16:57 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:07.645 Running I/O for 2 seconds... 
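The error-path test reuses the same initiator flow but arms the target to produce bad digests. Earlier in the trace the target (default RPC socket) had crc32c assigned to the error-injection accel module, any stale injection was cleared with -t disable, and corrupt-type injection was armed with the -t corrupt -i 256 arguments captured above; on the bperf side, bdev_nvme_set_options enables NVMe error statistics and retries failed I/O (a retry count of -1 means unbounded retries). Pulled out of the rpc_cmd/bperf_rpc wrappers, the relevant calls are:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target side (nvmf_tgt on the default /var/tmp/spdk.sock)
    $rpc accel_assign_opc -o crc32c -m error
    $rpc accel_error_inject_error -o crc32c -t disable          # clear any stale injection
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256   # arm corrupt-type injection, flags as traced above
    # initiator side (bdevperf)
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

The corrupted digests are what the read completions that follow are showing: the initiator's nvme_tcp layer reports "data digest error" on the queue pair and the commands finish as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which bdev_nvme then retries.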
00:18:07.645 [2024-12-08 05:16:57.394900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:07.645 [2024-12-08 05:16:57.394963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.645 [2024-12-08 05:16:57.394980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.645 [2024-12-08 05:16:57.412269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:07.645 [2024-12-08 05:16:57.412315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.645 [2024-12-08 05:16:57.412330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.903 [2024-12-08 05:16:57.429627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:07.903 [2024-12-08 05:16:57.429830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.903 [2024-12-08 05:16:57.429849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.903 [2024-12-08 05:16:57.447262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:07.903 [2024-12-08 05:16:57.447312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.903 [2024-12-08 05:16:57.447327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.903 [2024-12-08 05:16:57.465851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:07.903 [2024-12-08 05:16:57.465926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.903 [2024-12-08 05:16:57.465943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.903 [2024-12-08 05:16:57.485705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:07.903 [2024-12-08 05:16:57.485757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.903 [2024-12-08 05:16:57.485772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.903 [2024-12-08 05:16:57.505279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:07.903 [2024-12-08 05:16:57.505470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.903 [2024-12-08 05:16:57.505490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.903 [2024-12-08 05:16:57.525026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:07.903 [2024-12-08 05:16:57.525068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.903 [2024-12-08 05:16:57.525083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.903 [2024-12-08 05:16:57.544635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:07.903 [2024-12-08 05:16:57.544829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.903 [2024-12-08 05:16:57.544848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.903 [2024-12-08 05:16:57.562287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:07.903 [2024-12-08 05:16:57.562469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.903 [2024-12-08 05:16:57.562639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.903 [2024-12-08 05:16:57.580095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:07.903 [2024-12-08 05:16:57.580275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.903 [2024-12-08 05:16:57.580467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.903 [2024-12-08 05:16:57.597889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:07.903 [2024-12-08 05:16:57.598070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.903 [2024-12-08 05:16:57.598213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.904 [2024-12-08 05:16:57.615658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:07.904 [2024-12-08 05:16:57.615854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.904 [2024-12-08 05:16:57.615986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.904 [2024-12-08 05:16:57.633400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:07.904 [2024-12-08 05:16:57.633578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.904 [2024-12-08 05:16:57.633740] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.904 [2024-12-08 05:16:57.651406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:07.904 [2024-12-08 05:16:57.651589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.904 [2024-12-08 05:16:57.651745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.904 [2024-12-08 05:16:57.669064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:07.904 [2024-12-08 05:16:57.669242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.904 [2024-12-08 05:16:57.669380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.904 [2024-12-08 05:16:57.686762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:07.904 [2024-12-08 05:16:57.686949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.904 [2024-12-08 05:16:57.687091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.162 [2024-12-08 05:16:57.704731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.162 [2024-12-08 05:16:57.704915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.162 [2024-12-08 05:16:57.705061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.162 [2024-12-08 05:16:57.722502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.162 [2024-12-08 05:16:57.722693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.162 [2024-12-08 05:16:57.722917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.162 [2024-12-08 05:16:57.740289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.162 [2024-12-08 05:16:57.740469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.162 [2024-12-08 05:16:57.740610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.162 [2024-12-08 05:16:57.757905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.162 [2024-12-08 05:16:57.758062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.162 [2024-12-08 
05:16:57.758081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.162 [2024-12-08 05:16:57.775366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.162 [2024-12-08 05:16:57.775417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.162 [2024-12-08 05:16:57.775431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.162 [2024-12-08 05:16:57.792683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.162 [2024-12-08 05:16:57.792722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.162 [2024-12-08 05:16:57.792737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.162 [2024-12-08 05:16:57.809975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.162 [2024-12-08 05:16:57.810014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.162 [2024-12-08 05:16:57.810028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.162 [2024-12-08 05:16:57.827259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.162 [2024-12-08 05:16:57.827298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.162 [2024-12-08 05:16:57.827312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.162 [2024-12-08 05:16:57.844614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.162 [2024-12-08 05:16:57.844657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.162 [2024-12-08 05:16:57.844687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.162 [2024-12-08 05:16:57.862000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.162 [2024-12-08 05:16:57.862042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.162 [2024-12-08 05:16:57.862056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.162 [2024-12-08 05:16:57.879426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.162 [2024-12-08 05:16:57.879470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9327 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:08.162 [2024-12-08 05:16:57.879484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.162 [2024-12-08 05:16:57.896844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.162 [2024-12-08 05:16:57.897016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.162 [2024-12-08 05:16:57.897037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.162 [2024-12-08 05:16:57.914390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.162 [2024-12-08 05:16:57.914434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.162 [2024-12-08 05:16:57.914448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.162 [2024-12-08 05:16:57.932322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.162 [2024-12-08 05:16:57.932543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.162 [2024-12-08 05:16:57.932573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.421 [2024-12-08 05:16:57.950666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.421 [2024-12-08 05:16:57.950730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.421 [2024-12-08 05:16:57.950748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.421 [2024-12-08 05:16:57.968156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.421 [2024-12-08 05:16:57.968201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.421 [2024-12-08 05:16:57.968216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.421 [2024-12-08 05:16:57.985490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.421 [2024-12-08 05:16:57.985531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.421 [2024-12-08 05:16:57.985547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.421 [2024-12-08 05:16:58.002790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.421 [2024-12-08 05:16:58.002965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:69 nsid:1 lba:11569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.421 [2024-12-08 05:16:58.002984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.421 [2024-12-08 05:16:58.020251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.421 [2024-12-08 05:16:58.020293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.421 [2024-12-08 05:16:58.020309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.421 [2024-12-08 05:16:58.037667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.421 [2024-12-08 05:16:58.037719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.421 [2024-12-08 05:16:58.037734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.421 [2024-12-08 05:16:58.055023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.421 [2024-12-08 05:16:58.055063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.421 [2024-12-08 05:16:58.055078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.421 [2024-12-08 05:16:58.072312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.421 [2024-12-08 05:16:58.072352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.421 [2024-12-08 05:16:58.072367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.421 [2024-12-08 05:16:58.089618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.421 [2024-12-08 05:16:58.089803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.421 [2024-12-08 05:16:58.089823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.421 [2024-12-08 05:16:58.107098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.421 [2024-12-08 05:16:58.107139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.421 [2024-12-08 05:16:58.107153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.421 [2024-12-08 05:16:58.124434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.421 [2024-12-08 05:16:58.124474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.421 [2024-12-08 05:16:58.124488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.421 [2024-12-08 05:16:58.141753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.421 [2024-12-08 05:16:58.141792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.421 [2024-12-08 05:16:58.141806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.421 [2024-12-08 05:16:58.159074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.421 [2024-12-08 05:16:58.159114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.421 [2024-12-08 05:16:58.159128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.421 [2024-12-08 05:16:58.176459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.422 [2024-12-08 05:16:58.176626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.422 [2024-12-08 05:16:58.176645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.422 [2024-12-08 05:16:58.194035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.422 [2024-12-08 05:16:58.194077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.422 [2024-12-08 05:16:58.194092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.681 [2024-12-08 05:16:58.211559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.681 [2024-12-08 05:16:58.211740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.681 [2024-12-08 05:16:58.211760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.681 [2024-12-08 05:16:58.229076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.681 [2024-12-08 05:16:58.229117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.681 [2024-12-08 05:16:58.229131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.681 [2024-12-08 05:16:58.246521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 
00:18:08.681 [2024-12-08 05:16:58.246563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.681 [2024-12-08 05:16:58.246577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.681 [2024-12-08 05:16:58.263900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.681 [2024-12-08 05:16:58.263940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.681 [2024-12-08 05:16:58.263956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.681 [2024-12-08 05:16:58.281223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.681 [2024-12-08 05:16:58.281263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.681 [2024-12-08 05:16:58.281277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.681 [2024-12-08 05:16:58.298506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.681 [2024-12-08 05:16:58.298668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.681 [2024-12-08 05:16:58.298702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.681 [2024-12-08 05:16:58.316001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.681 [2024-12-08 05:16:58.316041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.681 [2024-12-08 05:16:58.316056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.681 [2024-12-08 05:16:58.333324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.681 [2024-12-08 05:16:58.333485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.681 [2024-12-08 05:16:58.333504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.681 [2024-12-08 05:16:58.350815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.681 [2024-12-08 05:16:58.350856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.681 [2024-12-08 05:16:58.350870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.681 [2024-12-08 05:16:58.368177] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.681 [2024-12-08 05:16:58.368339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.681 [2024-12-08 05:16:58.368358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.681 [2024-12-08 05:16:58.385658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.682 [2024-12-08 05:16:58.385717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.682 [2024-12-08 05:16:58.385732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.682 [2024-12-08 05:16:58.402988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.682 [2024-12-08 05:16:58.403155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.682 [2024-12-08 05:16:58.403174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.682 [2024-12-08 05:16:58.420475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.682 [2024-12-08 05:16:58.420516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.682 [2024-12-08 05:16:58.420532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.682 [2024-12-08 05:16:58.437846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.682 [2024-12-08 05:16:58.438008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.682 [2024-12-08 05:16:58.438027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.682 [2024-12-08 05:16:58.455543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.682 [2024-12-08 05:16:58.455585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.682 [2024-12-08 05:16:58.455599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.942 [2024-12-08 05:16:58.473153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.942 [2024-12-08 05:16:58.473374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.942 [2024-12-08 05:16:58.473396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:18:08.942 [2024-12-08 05:16:58.490894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.942 [2024-12-08 05:16:58.490938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.942 [2024-12-08 05:16:58.490953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.942 [2024-12-08 05:16:58.515845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.942 [2024-12-08 05:16:58.516018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.942 [2024-12-08 05:16:58.516039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.942 [2024-12-08 05:16:58.533315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.942 [2024-12-08 05:16:58.533357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.942 [2024-12-08 05:16:58.533372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.942 [2024-12-08 05:16:58.550994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.942 [2024-12-08 05:16:58.551170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.942 [2024-12-08 05:16:58.551191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.942 [2024-12-08 05:16:58.570335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.942 [2024-12-08 05:16:58.570509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.942 [2024-12-08 05:16:58.570530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.942 [2024-12-08 05:16:58.590097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.942 [2024-12-08 05:16:58.590142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.942 [2024-12-08 05:16:58.590157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.942 [2024-12-08 05:16:58.609597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.942 [2024-12-08 05:16:58.609783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.942 [2024-12-08 05:16:58.609802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.942 [2024-12-08 05:16:58.629340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.942 [2024-12-08 05:16:58.629386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.942 [2024-12-08 05:16:58.629401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.942 [2024-12-08 05:16:58.647781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.942 [2024-12-08 05:16:58.647948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.942 [2024-12-08 05:16:58.647969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.942 [2024-12-08 05:16:58.665237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.942 [2024-12-08 05:16:58.665278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.942 [2024-12-08 05:16:58.665292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.942 [2024-12-08 05:16:58.682556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.942 [2024-12-08 05:16:58.682599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.942 [2024-12-08 05:16:58.682614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.942 [2024-12-08 05:16:58.699876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.942 [2024-12-08 05:16:58.699916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.942 [2024-12-08 05:16:58.699931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.942 [2024-12-08 05:16:58.717172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:08.942 [2024-12-08 05:16:58.717215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.942 [2024-12-08 05:16:58.717229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.201 [2024-12-08 05:16:58.734471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.201 [2024-12-08 05:16:58.734511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.201 [2024-12-08 
05:16:58.734525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.201 [2024-12-08 05:16:58.751780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.201 [2024-12-08 05:16:58.751819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.201 [2024-12-08 05:16:58.751833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.201 [2024-12-08 05:16:58.769026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.201 [2024-12-08 05:16:58.769066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.201 [2024-12-08 05:16:58.769080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.201 [2024-12-08 05:16:58.786317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.201 [2024-12-08 05:16:58.786492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.201 [2024-12-08 05:16:58.786511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.201 [2024-12-08 05:16:58.803817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.201 [2024-12-08 05:16:58.803859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.201 [2024-12-08 05:16:58.803874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.201 [2024-12-08 05:16:58.821084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.201 [2024-12-08 05:16:58.821245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.201 [2024-12-08 05:16:58.821263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.202 [2024-12-08 05:16:58.838505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.202 [2024-12-08 05:16:58.838547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.202 [2024-12-08 05:16:58.838561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.202 [2024-12-08 05:16:58.855825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.202 [2024-12-08 05:16:58.855981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7623 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:09.202 [2024-12-08 05:16:58.856001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.202 [2024-12-08 05:16:58.873240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.202 [2024-12-08 05:16:58.873281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.202 [2024-12-08 05:16:58.873296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.202 [2024-12-08 05:16:58.890535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.202 [2024-12-08 05:16:58.890578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.202 [2024-12-08 05:16:58.890593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.202 [2024-12-08 05:16:58.907868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.202 [2024-12-08 05:16:58.907907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.202 [2024-12-08 05:16:58.907921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.202 [2024-12-08 05:16:58.925224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.202 [2024-12-08 05:16:58.925264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.202 [2024-12-08 05:16:58.925279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.202 [2024-12-08 05:16:58.942577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.202 [2024-12-08 05:16:58.942756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.202 [2024-12-08 05:16:58.942774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.202 [2024-12-08 05:16:58.961338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.202 [2024-12-08 05:16:58.961399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.202 [2024-12-08 05:16:58.961418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.202 [2024-12-08 05:16:58.978907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.202 [2024-12-08 05:16:58.978950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:13901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.202 [2024-12-08 05:16:58.978966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.459 [2024-12-08 05:16:58.996309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.459 [2024-12-08 05:16:58.996352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.459 [2024-12-08 05:16:58.996367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.459 [2024-12-08 05:16:59.013630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.459 [2024-12-08 05:16:59.013685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.459 [2024-12-08 05:16:59.013702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.459 [2024-12-08 05:16:59.030928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.459 [2024-12-08 05:16:59.030970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.459 [2024-12-08 05:16:59.030984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.459 [2024-12-08 05:16:59.048194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.459 [2024-12-08 05:16:59.048234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.459 [2024-12-08 05:16:59.048249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.459 [2024-12-08 05:16:59.065441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.459 [2024-12-08 05:16:59.065609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.459 [2024-12-08 05:16:59.065628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.459 [2024-12-08 05:16:59.082874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.459 [2024-12-08 05:16:59.082915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.459 [2024-12-08 05:16:59.082930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.459 [2024-12-08 05:16:59.100150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.459 [2024-12-08 05:16:59.100311] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.459 [2024-12-08 05:16:59.100329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.459 [2024-12-08 05:16:59.117597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.459 [2024-12-08 05:16:59.117639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.459 [2024-12-08 05:16:59.117654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.459 [2024-12-08 05:16:59.134866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.459 [2024-12-08 05:16:59.135026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.459 [2024-12-08 05:16:59.135045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.459 [2024-12-08 05:16:59.152298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.459 [2024-12-08 05:16:59.152338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.459 [2024-12-08 05:16:59.152353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.459 [2024-12-08 05:16:59.169579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.459 [2024-12-08 05:16:59.169619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.459 [2024-12-08 05:16:59.169633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.459 [2024-12-08 05:16:59.186910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.459 [2024-12-08 05:16:59.186951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.459 [2024-12-08 05:16:59.186967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.459 [2024-12-08 05:16:59.204243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.459 [2024-12-08 05:16:59.204283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.459 [2024-12-08 05:16:59.204298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.459 [2024-12-08 05:16:59.221540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 
00:18:09.459 [2024-12-08 05:16:59.221581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.459 [2024-12-08 05:16:59.221595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.459 [2024-12-08 05:16:59.238817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.459 [2024-12-08 05:16:59.238856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.459 [2024-12-08 05:16:59.238870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.716 [2024-12-08 05:16:59.256122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.716 [2024-12-08 05:16:59.256284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.716 [2024-12-08 05:16:59.256304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.716 [2024-12-08 05:16:59.273550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.716 [2024-12-08 05:16:59.273592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.716 [2024-12-08 05:16:59.273606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.716 [2024-12-08 05:16:59.290796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.716 [2024-12-08 05:16:59.290955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.716 [2024-12-08 05:16:59.290973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.716 [2024-12-08 05:16:59.308224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.716 [2024-12-08 05:16:59.308265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.717 [2024-12-08 05:16:59.308280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.717 [2024-12-08 05:16:59.325543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0) 00:18:09.717 [2024-12-08 05:16:59.325586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.717 [2024-12-08 05:16:59.325600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.717 [2024-12-08 05:16:59.342883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x14f60b0)
00:18:09.717 [2024-12-08 05:16:59.342924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:09.717 [2024-12-08 05:16:59.342939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:18:09.717 [2024-12-08 05:16:59.360151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f60b0)
00:18:09.717 [2024-12-08 05:16:59.360190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:09.717 [2024-12-08 05:16:59.360204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:18:09.717
00:18:09.717 Latency(us)
00:18:09.717 [2024-12-08T05:16:59.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:09.717 [2024-12-08T05:16:59.503Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:18:09.717 nvme0n1 : 2.01 14309.43 55.90 0.00 0.00 8939.62 8102.63 33840.41
00:18:09.717 [2024-12-08T05:16:59.503Z] ===================================================================================================================
00:18:09.717 [2024-12-08T05:16:59.503Z] Total : 14309.43 55.90 0.00 0.00 8939.62 8102.63 33840.41
00:18:09.717 0
00:18:09.717 05:16:59 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:18:09.717 05:16:59 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:18:09.717 05:16:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:18:09.717 05:16:59 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:18:09.717 | .driver_specific
00:18:09.717 | .nvme_error
00:18:09.717 | .status_code
00:18:09.717 | .command_transient_transport_error'
00:18:09.974 05:16:59 -- host/digest.sh@71 -- # (( 112 > 0 ))
00:18:09.974 05:16:59 -- host/digest.sh@73 -- # killprocess 84076
00:18:09.974 05:16:59 -- common/autotest_common.sh@936 -- # '[' -z 84076 ']'
00:18:09.974 05:16:59 -- common/autotest_common.sh@940 -- # kill -0 84076
00:18:09.974 05:16:59 -- common/autotest_common.sh@941 -- # uname
00:18:09.974 05:16:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:09.974 05:16:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84076
00:18:09.974 killing process with pid 84076
00:18:09.974 Received shutdown signal, test time was about 2.000000 seconds
00:18:09.974
00:18:09.974 Latency(us)
00:18:09.974 [2024-12-08T05:16:59.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:09.974 [2024-12-08T05:16:59.760Z] ===================================================================================================================
00:18:09.974 [2024-12-08T05:16:59.760Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:09.974 05:16:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:18:09.974 05:16:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:18:09.974 05:16:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84076'
00:18:09.974 05:16:59 -- common/autotest_common.sh@955 -- # kill 84076
00:18:09.974 05:16:59 -- common/autotest_common.sh@960 -- # wait 84076
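The get_transient_errcount trace above is the digest test's pass/fail check: it asks the bdevperf app for per-bdev I/O statistics over the bperf RPC socket and pulls the NVMe transient-transport-error counter out of the JSON with jq. A minimal stand-alone sketch of the same query, assuming bdevperf is still serving RPCs on /var/tmp/bperf.sock and error accounting was enabled via bdev_nvme_set_options --nvme-error-stat (both visible in the surrounding trace):

  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # test passes only if at least one transient transport error was counted

In this run the filter returned 112, so the (( 112 > 0 )) check passed and the first bdevperf instance (pid 84076) was torn down before the next variant of the test started.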
00:18:10.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:18:10.232 05:16:59 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:18:10.232 05:16:59 -- host/digest.sh@54 -- # local rw bs qd
00:18:10.232 05:16:59 -- host/digest.sh@56 -- # rw=randread
00:18:10.232 05:16:59 -- host/digest.sh@56 -- # bs=131072
00:18:10.232 05:16:59 -- host/digest.sh@56 -- # qd=16
00:18:10.232 05:16:59 -- host/digest.sh@58 -- # bperfpid=84123
00:18:10.232 05:16:59 -- host/digest.sh@60 -- # waitforlisten 84123 /var/tmp/bperf.sock
00:18:10.232 05:16:59 -- common/autotest_common.sh@829 -- # '[' -z 84123 ']'
00:18:10.232 05:16:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:18:10.232 05:16:59 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:18:10.232 05:16:59 -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:10.232 05:16:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:18:10.232 05:16:59 -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:10.232 05:16:59 -- common/autotest_common.sh@10 -- # set +x
00:18:10.232 [2024-12-08 05:16:59.920390] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:18:10.232 [2024-12-08 05:16:59.920747] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84123 ]
00:18:10.232 I/O size of 131072 is greater than zero copy threshold (65536).
00:18:10.232 Zero copy mechanism will not be used.
00:18:10.489 [2024-12-08 05:17:00.068242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:10.489 [2024-12-08 05:17:00.106406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:18:11.429 05:17:00 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:11.429 05:17:00 -- common/autotest_common.sh@862 -- # return 0
00:18:11.429 05:17:00 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:18:11.429 05:17:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:18:11.429 05:17:01 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:18:11.429 05:17:01 -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:11.429 05:17:01 -- common/autotest_common.sh@10 -- # set +x
00:18:11.429 05:17:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:11.429 05:17:01 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:18:11.429 05:17:01 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:18:12.000 nvme0n1
00:18:12.000 05:17:01 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:18:12.000 05:17:01 -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:12.000 05:17:01 -- common/autotest_common.sh@10 -- # set +x
00:18:12.000 05:17:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:12.000 05:17:01 -- host/digest.sh@69 -- # bperf_py perform_tests
00:18:12.000 05:17:01 -- 
host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:12.000 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:12.000 Zero copy mechanism will not be used. 00:18:12.000 Running I/O for 2 seconds... 00:18:12.000 [2024-12-08 05:17:01.661372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.000 [2024-12-08 05:17:01.661430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.000 [2024-12-08 05:17:01.661447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.000 [2024-12-08 05:17:01.665818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.000 [2024-12-08 05:17:01.665860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.000 [2024-12-08 05:17:01.665874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.000 [2024-12-08 05:17:01.670187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.000 [2024-12-08 05:17:01.670229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.000 [2024-12-08 05:17:01.670243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.000 [2024-12-08 05:17:01.674513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.000 [2024-12-08 05:17:01.674555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.000 [2024-12-08 05:17:01.674569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.000 [2024-12-08 05:17:01.678923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.000 [2024-12-08 05:17:01.678964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.000 [2024-12-08 05:17:01.678978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.000 [2024-12-08 05:17:01.683248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.000 [2024-12-08 05:17:01.683288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.000 [2024-12-08 05:17:01.683303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.000 [2024-12-08 05:17:01.687584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.000 
[2024-12-08 05:17:01.687625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.000 [2024-12-08 05:17:01.687639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.000 [2024-12-08 05:17:01.691993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.000 [2024-12-08 05:17:01.692033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.000 [2024-12-08 05:17:01.692047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.000 [2024-12-08 05:17:01.696393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.000 [2024-12-08 05:17:01.696433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.000 [2024-12-08 05:17:01.696448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.000 [2024-12-08 05:17:01.700839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.000 [2024-12-08 05:17:01.700879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.000 [2024-12-08 05:17:01.700894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.000 [2024-12-08 05:17:01.705229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.000 [2024-12-08 05:17:01.705270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.000 [2024-12-08 05:17:01.705284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.000 [2024-12-08 05:17:01.709606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.000 [2024-12-08 05:17:01.709796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.000 [2024-12-08 05:17:01.709815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.000 [2024-12-08 05:17:01.714177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.000 [2024-12-08 05:17:01.714220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.001 [2024-12-08 05:17:01.714234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.001 [2024-12-08 05:17:01.718605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.001 [2024-12-08 05:17:01.718646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.001 [2024-12-08 05:17:01.718660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.001 [2024-12-08 05:17:01.723050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.001 [2024-12-08 05:17:01.723092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.001 [2024-12-08 05:17:01.723107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.001 [2024-12-08 05:17:01.727484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.001 [2024-12-08 05:17:01.727526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.001 [2024-12-08 05:17:01.727541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.001 [2024-12-08 05:17:01.731932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.001 [2024-12-08 05:17:01.731972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.001 [2024-12-08 05:17:01.731987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.001 [2024-12-08 05:17:01.736368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.001 [2024-12-08 05:17:01.736409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.001 [2024-12-08 05:17:01.736424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.001 [2024-12-08 05:17:01.742025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.001 [2024-12-08 05:17:01.742088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.001 [2024-12-08 05:17:01.742113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.001 [2024-12-08 05:17:01.747454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.001 [2024-12-08 05:17:01.747501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.001 [2024-12-08 05:17:01.747524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.001 [2024-12-08 05:17:01.752009] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.001 [2024-12-08 05:17:01.752060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.001 [2024-12-08 05:17:01.752076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.001 [2024-12-08 05:17:01.756736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.001 [2024-12-08 05:17:01.756778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.001 [2024-12-08 05:17:01.756792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.001 [2024-12-08 05:17:01.761190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.001 [2024-12-08 05:17:01.761235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.001 [2024-12-08 05:17:01.761251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.001 [2024-12-08 05:17:01.765575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.001 [2024-12-08 05:17:01.765617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.001 [2024-12-08 05:17:01.765631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.001 [2024-12-08 05:17:01.770059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.001 [2024-12-08 05:17:01.770102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.001 [2024-12-08 05:17:01.770117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.001 [2024-12-08 05:17:01.774462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.001 [2024-12-08 05:17:01.774505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.001 [2024-12-08 05:17:01.774520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.001 [2024-12-08 05:17:01.778948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.001 [2024-12-08 05:17:01.778990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.001 [2024-12-08 05:17:01.779004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:18:12.001 [2024-12-08 05:17:01.783399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.001 [2024-12-08 05:17:01.783439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.001 [2024-12-08 05:17:01.783453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.261 [2024-12-08 05:17:01.787774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.261 [2024-12-08 05:17:01.787814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.261 [2024-12-08 05:17:01.787828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.261 [2024-12-08 05:17:01.792146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.261 [2024-12-08 05:17:01.792183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.261 [2024-12-08 05:17:01.792197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.261 [2024-12-08 05:17:01.796634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.261 [2024-12-08 05:17:01.796691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.261 [2024-12-08 05:17:01.796708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.261 [2024-12-08 05:17:01.801014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.261 [2024-12-08 05:17:01.801054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.261 [2024-12-08 05:17:01.801069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.261 [2024-12-08 05:17:01.805351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.261 [2024-12-08 05:17:01.805391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.261 [2024-12-08 05:17:01.805405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.261 [2024-12-08 05:17:01.809729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.261 [2024-12-08 05:17:01.809769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.261 [2024-12-08 05:17:01.809783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.261 [2024-12-08 05:17:01.814006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.261 [2024-12-08 05:17:01.814045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.261 [2024-12-08 05:17:01.814059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.261 [2024-12-08 05:17:01.818381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.261 [2024-12-08 05:17:01.818421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.261 [2024-12-08 05:17:01.818436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.261 [2024-12-08 05:17:01.822767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.261 [2024-12-08 05:17:01.822807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.261 [2024-12-08 05:17:01.822821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.261 [2024-12-08 05:17:01.828097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.261 [2024-12-08 05:17:01.828141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.261 [2024-12-08 05:17:01.828154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.261 [2024-12-08 05:17:01.832548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.261 [2024-12-08 05:17:01.832589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.261 [2024-12-08 05:17:01.832604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.261 [2024-12-08 05:17:01.837259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.261 [2024-12-08 05:17:01.837305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.261 [2024-12-08 05:17:01.837320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.261 [2024-12-08 05:17:01.842219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.261 [2024-12-08 05:17:01.842395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.261 [2024-12-08 05:17:01.842414] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.261 [2024-12-08 05:17:01.846857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.261 [2024-12-08 05:17:01.846899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.261 [2024-12-08 05:17:01.846914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.261 [2024-12-08 05:17:01.852078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.261 [2024-12-08 05:17:01.852119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.261 [2024-12-08 05:17:01.852133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.261 [2024-12-08 05:17:01.857215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.261 [2024-12-08 05:17:01.857371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.261 [2024-12-08 05:17:01.857391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.261 [2024-12-08 05:17:01.862477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.261 [2024-12-08 05:17:01.862521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.261 [2024-12-08 05:17:01.862535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.261 [2024-12-08 05:17:01.867592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.261 [2024-12-08 05:17:01.867635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.261 [2024-12-08 05:17:01.867649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.261 [2024-12-08 05:17:01.872686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.261 [2024-12-08 05:17:01.872724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.261 [2024-12-08 05:17:01.872738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.261 [2024-12-08 05:17:01.877833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.261 [2024-12-08 05:17:01.877871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:12.261 [2024-12-08 05:17:01.877884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.261 [2024-12-08 05:17:01.882892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.261 [2024-12-08 05:17:01.882929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.882943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.888147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.888304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.888324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.893344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.893524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.893650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.898869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.899064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.899209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.904485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.904660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.904813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.909969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.910141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.910349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.915572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.915762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.915894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.921073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.921244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.921419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.926697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.926869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.926995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.931512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.931698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.931856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.936353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.936527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.936655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.941179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.941221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.941236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.945534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.945575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.945590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.949977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.950017] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.950031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.954301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.954341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.954356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.958706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.958745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.958759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.963071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.963111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.963125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.967445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.967485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.967499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.971865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.971905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.971919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.976235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.976276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.976291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.980655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 
00:18:12.262 [2024-12-08 05:17:01.980713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.980728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.985008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.985048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.985062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.989334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.989375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.989389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.993791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.993831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.262 [2024-12-08 05:17:01.993847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.262 [2024-12-08 05:17:01.998168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.262 [2024-12-08 05:17:01.998209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.263 [2024-12-08 05:17:01.998223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.263 [2024-12-08 05:17:02.002595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.263 [2024-12-08 05:17:02.002635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.263 [2024-12-08 05:17:02.002650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.263 [2024-12-08 05:17:02.007044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.263 [2024-12-08 05:17:02.007204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.263 [2024-12-08 05:17:02.007223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.263 [2024-12-08 05:17:02.011556] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.263 [2024-12-08 05:17:02.011598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.263 [2024-12-08 05:17:02.011613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.263 [2024-12-08 05:17:02.015966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.263 [2024-12-08 05:17:02.016010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.263 [2024-12-08 05:17:02.016025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.263 [2024-12-08 05:17:02.020412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.263 [2024-12-08 05:17:02.020453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.263 [2024-12-08 05:17:02.020468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.263 [2024-12-08 05:17:02.024818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.263 [2024-12-08 05:17:02.024858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.263 [2024-12-08 05:17:02.024872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.263 [2024-12-08 05:17:02.029239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.263 [2024-12-08 05:17:02.029279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.263 [2024-12-08 05:17:02.029294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.263 [2024-12-08 05:17:02.033812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.263 [2024-12-08 05:17:02.033854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.263 [2024-12-08 05:17:02.033868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.263 [2024-12-08 05:17:02.038274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.263 [2024-12-08 05:17:02.038317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.263 [2024-12-08 05:17:02.038331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:18:12.263 [2024-12-08 05:17:02.042718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.263 [2024-12-08 05:17:02.042759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.263 [2024-12-08 05:17:02.042773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.047050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.047091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.047106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.051490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.051530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.051545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.055812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.055850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.055864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.060242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.060282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.060297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.064748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.064788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.064802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.069127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.069167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.069181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.073556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.073596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.073610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.077922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.077964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.077977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.082308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.082348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.082362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.086797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.086837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.086851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.091259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.091300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.091314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.095692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.095731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.095745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.100127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.100168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.100182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.104503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.104542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.104556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.108898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.108939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.108953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.113202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.113242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.113256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.117616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.117657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.117688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.122022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.122060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.122074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.126359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.126399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.126413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.130732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.130771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:12.523 [2024-12-08 05:17:02.130786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.135095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.135135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.135148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.139492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.139532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.139546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.143822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.143861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.143875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.148195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.148235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.148249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.152589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.152629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.152643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.156883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.156923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.156937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.161186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.161226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.161239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.165558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.165599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.165613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.169966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.170005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.170020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.174788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.174831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.174846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.179280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.179324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.179339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.183756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.183796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.523 [2024-12-08 05:17:02.183811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.523 [2024-12-08 05:17:02.188243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.523 [2024-12-08 05:17:02.188288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.188303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.192637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.192693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.192710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.196934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.196974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.196989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.201316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.201356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.201371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.205759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.205799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.205813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.210144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.210185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.210198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.214456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.214496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.214510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.218789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.218828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.218842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.223034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 
00:18:12.524 [2024-12-08 05:17:02.223073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.223087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.227415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.227454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.227469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.231823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.231861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.231875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.236167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.236207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.236222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.240556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.240596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.240610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.244900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.244940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.244955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.249319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.249485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.249504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.253968] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.254008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.254023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.258395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.258435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.258449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.262792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.262832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.262846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.267104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.267143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.267157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.271488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.271528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.271542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.275795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.275833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.275847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.280147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.280187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.280201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.284511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.284552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.284566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.288897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.288936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.288950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.293300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.293340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.293355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.297719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.297757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.297771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.524 [2024-12-08 05:17:02.302097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.524 [2024-12-08 05:17:02.302136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.524 [2024-12-08 05:17:02.302150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.784 [2024-12-08 05:17:02.306507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.784 [2024-12-08 05:17:02.306547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.784 [2024-12-08 05:17:02.306561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.784 [2024-12-08 05:17:02.310984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.784 [2024-12-08 05:17:02.311025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.784 [2024-12-08 05:17:02.311039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.784 [2024-12-08 05:17:02.315409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.784 [2024-12-08 05:17:02.315450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.784 [2024-12-08 05:17:02.315465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.784 [2024-12-08 05:17:02.319766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.784 [2024-12-08 05:17:02.319805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.784 [2024-12-08 05:17:02.319819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.784 [2024-12-08 05:17:02.324145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.784 [2024-12-08 05:17:02.324186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.784 [2024-12-08 05:17:02.324201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.784 [2024-12-08 05:17:02.328507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.784 [2024-12-08 05:17:02.328547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.784 [2024-12-08 05:17:02.328561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.784 [2024-12-08 05:17:02.332920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.784 [2024-12-08 05:17:02.332961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.784 [2024-12-08 05:17:02.332975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.784 [2024-12-08 05:17:02.337327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.784 [2024-12-08 05:17:02.337366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.784 [2024-12-08 05:17:02.337380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.784 [2024-12-08 05:17:02.341724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.784 [2024-12-08 05:17:02.341764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.784 [2024-12-08 05:17:02.341779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.784 [2024-12-08 05:17:02.346097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.784 [2024-12-08 05:17:02.346139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.784 [2024-12-08 05:17:02.346154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.784 [2024-12-08 05:17:02.350429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.784 [2024-12-08 05:17:02.350471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.784 [2024-12-08 05:17:02.350486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.784 [2024-12-08 05:17:02.354918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.784 [2024-12-08 05:17:02.354960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.784 [2024-12-08 05:17:02.354975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.784 [2024-12-08 05:17:02.359357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.784 [2024-12-08 05:17:02.359410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.784 [2024-12-08 05:17:02.359425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.784 [2024-12-08 05:17:02.363811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.784 [2024-12-08 05:17:02.363852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.784 [2024-12-08 05:17:02.363867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.784 [2024-12-08 05:17:02.368200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.785 [2024-12-08 05:17:02.368241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.785 [2024-12-08 05:17:02.368257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.785 [2024-12-08 05:17:02.372602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.785 [2024-12-08 05:17:02.372643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:12.785 [2024-12-08 05:17:02.372657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.785 [2024-12-08 05:17:02.376993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.785 [2024-12-08 05:17:02.377033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.785 [2024-12-08 05:17:02.377047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.785 [2024-12-08 05:17:02.381358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.785 [2024-12-08 05:17:02.381399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.785 [2024-12-08 05:17:02.381413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.785 [2024-12-08 05:17:02.385712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.785 [2024-12-08 05:17:02.385753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.785 [2024-12-08 05:17:02.385768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.785 [2024-12-08 05:17:02.390146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.785 [2024-12-08 05:17:02.390187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.785 [2024-12-08 05:17:02.390201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.785 [2024-12-08 05:17:02.394504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.785 [2024-12-08 05:17:02.394545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.785 [2024-12-08 05:17:02.394559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.785 [2024-12-08 05:17:02.398890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.785 [2024-12-08 05:17:02.398930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.785 [2024-12-08 05:17:02.398945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.785 [2024-12-08 05:17:02.403272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.785 [2024-12-08 05:17:02.403312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.785 [2024-12-08 05:17:02.403326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.785 [2024-12-08 05:17:02.407498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.785 [2024-12-08 05:17:02.407538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.785 [2024-12-08 05:17:02.407552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.785 [2024-12-08 05:17:02.412034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.785 [2024-12-08 05:17:02.412077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.785 [2024-12-08 05:17:02.412093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.785 [2024-12-08 05:17:02.416557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.785 [2024-12-08 05:17:02.416600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.785 [2024-12-08 05:17:02.416615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.785 [2024-12-08 05:17:02.420969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.785 [2024-12-08 05:17:02.421011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.785 [2024-12-08 05:17:02.421025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.785 [2024-12-08 05:17:02.425425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.785 [2024-12-08 05:17:02.425466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.785 [2024-12-08 05:17:02.425480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.785 [2024-12-08 05:17:02.429871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.785 [2024-12-08 05:17:02.429910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.785 [2024-12-08 05:17:02.429924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.785 [2024-12-08 05:17:02.434215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.785 [2024-12-08 05:17:02.434255] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.785 [2024-12-08 05:17:02.434269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.785 [2024-12-08 05:17:02.439096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.785 [2024-12-08 05:17:02.439143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.785 [2024-12-08 05:17:02.439160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.785 [2024-12-08 05:17:02.443543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.785 [2024-12-08 05:17:02.443585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.785 [2024-12-08 05:17:02.443600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.785 [2024-12-08 05:17:02.447969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.785 [2024-12-08 05:17:02.448131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.785 [2024-12-08 05:17:02.448151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.785 [2024-12-08 05:17:02.452563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.785 [2024-12-08 05:17:02.452607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.785 [2024-12-08 05:17:02.452622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.785 [2024-12-08 05:17:02.457035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.785 [2024-12-08 05:17:02.457079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.785 [2024-12-08 05:17:02.457095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.786 [2024-12-08 05:17:02.462038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.786 [2024-12-08 05:17:02.462100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.786 [2024-12-08 05:17:02.462124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.786 [2024-12-08 05:17:02.467852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 
00:18:12.786 [2024-12-08 05:17:02.467912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.786 [2024-12-08 05:17:02.467935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.786 [2024-12-08 05:17:02.473540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.786 [2024-12-08 05:17:02.473602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.786 [2024-12-08 05:17:02.473628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.786 [2024-12-08 05:17:02.479359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.786 [2024-12-08 05:17:02.479608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.786 [2024-12-08 05:17:02.479637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.786 [2024-12-08 05:17:02.485287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.786 [2024-12-08 05:17:02.485350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.786 [2024-12-08 05:17:02.485376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.786 [2024-12-08 05:17:02.491084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.786 [2024-12-08 05:17:02.491149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.786 [2024-12-08 05:17:02.491176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.786 [2024-12-08 05:17:02.496750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.786 [2024-12-08 05:17:02.496810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.786 [2024-12-08 05:17:02.496836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.786 [2024-12-08 05:17:02.502635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.786 [2024-12-08 05:17:02.502719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.786 [2024-12-08 05:17:02.502745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.786 [2024-12-08 05:17:02.508426] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.786 [2024-12-08 05:17:02.508486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.786 [2024-12-08 05:17:02.508511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.786 [2024-12-08 05:17:02.514286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.786 [2024-12-08 05:17:02.514347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.786 [2024-12-08 05:17:02.514371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.786 [2024-12-08 05:17:02.520229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.786 [2024-12-08 05:17:02.520473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.786 [2024-12-08 05:17:02.520648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.786 [2024-12-08 05:17:02.526530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.786 [2024-12-08 05:17:02.526591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.786 [2024-12-08 05:17:02.526615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.786 [2024-12-08 05:17:02.532462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.786 [2024-12-08 05:17:02.532523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.786 [2024-12-08 05:17:02.532547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.786 [2024-12-08 05:17:02.537592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.786 [2024-12-08 05:17:02.537638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.786 [2024-12-08 05:17:02.537654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.786 [2024-12-08 05:17:02.542052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.786 [2024-12-08 05:17:02.542093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.786 [2024-12-08 05:17:02.542107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:18:12.786 [2024-12-08 05:17:02.546505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.786 [2024-12-08 05:17:02.546547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.786 [2024-12-08 05:17:02.546561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.786 [2024-12-08 05:17:02.550914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.786 [2024-12-08 05:17:02.550955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.786 [2024-12-08 05:17:02.550970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.786 [2024-12-08 05:17:02.555339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.786 [2024-12-08 05:17:02.555386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.786 [2024-12-08 05:17:02.555402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.786 [2024-12-08 05:17:02.559711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.786 [2024-12-08 05:17:02.559750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.786 [2024-12-08 05:17:02.559764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.786 [2024-12-08 05:17:02.564054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:12.786 [2024-12-08 05:17:02.564094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.786 [2024-12-08 05:17:02.564109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.046 [2024-12-08 05:17:02.568443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.046 [2024-12-08 05:17:02.568483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.046 [2024-12-08 05:17:02.568497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.046 [2024-12-08 05:17:02.572824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.046 [2024-12-08 05:17:02.572865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.046 [2024-12-08 05:17:02.572880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.046 [2024-12-08 05:17:02.577191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.046 [2024-12-08 05:17:02.577231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.046 [2024-12-08 05:17:02.577246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.046 [2024-12-08 05:17:02.581694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.046 [2024-12-08 05:17:02.581733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.046 [2024-12-08 05:17:02.581747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.046 [2024-12-08 05:17:02.586110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.046 [2024-12-08 05:17:02.586150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.046 [2024-12-08 05:17:02.586163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.046 [2024-12-08 05:17:02.590443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.046 [2024-12-08 05:17:02.590484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.046 [2024-12-08 05:17:02.590498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.046 [2024-12-08 05:17:02.594873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.046 [2024-12-08 05:17:02.594913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.046 [2024-12-08 05:17:02.594926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.046 [2024-12-08 05:17:02.599278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.046 [2024-12-08 05:17:02.599317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.046 [2024-12-08 05:17:02.599331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.046 [2024-12-08 05:17:02.603698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.046 [2024-12-08 05:17:02.603737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.046 [2024-12-08 05:17:02.603752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.046 [2024-12-08 05:17:02.608016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.047 [2024-12-08 05:17:02.608056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.047 [2024-12-08 05:17:02.608070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.047 [2024-12-08 05:17:02.612353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.047 [2024-12-08 05:17:02.612393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.047 [2024-12-08 05:17:02.612407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.047 [2024-12-08 05:17:02.616665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.047 [2024-12-08 05:17:02.616717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.047 [2024-12-08 05:17:02.616732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.047 [2024-12-08 05:17:02.621036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.047 [2024-12-08 05:17:02.621076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.047 [2024-12-08 05:17:02.621090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.047 [2024-12-08 05:17:02.625407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.047 [2024-12-08 05:17:02.625453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.047 [2024-12-08 05:17:02.625467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.047 [2024-12-08 05:17:02.629784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.047 [2024-12-08 05:17:02.629824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.047 [2024-12-08 05:17:02.629838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.047 [2024-12-08 05:17:02.634199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.047 [2024-12-08 05:17:02.634240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:13.047 [2024-12-08 05:17:02.634254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.047 [2024-12-08 05:17:02.638557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.047 [2024-12-08 05:17:02.638598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.047 [2024-12-08 05:17:02.638612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.047 [2024-12-08 05:17:02.642991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.047 [2024-12-08 05:17:02.643032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.047 [2024-12-08 05:17:02.643046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.047 [2024-12-08 05:17:02.647364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.047 [2024-12-08 05:17:02.647412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.047 [2024-12-08 05:17:02.647425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.047 [2024-12-08 05:17:02.651855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.047 [2024-12-08 05:17:02.651894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.047 [2024-12-08 05:17:02.651908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.047 [2024-12-08 05:17:02.656226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.047 [2024-12-08 05:17:02.656266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.047 [2024-12-08 05:17:02.656280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.047 [2024-12-08 05:17:02.660622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.047 [2024-12-08 05:17:02.660662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.047 [2024-12-08 05:17:02.660700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.047 [2024-12-08 05:17:02.665023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.047 [2024-12-08 05:17:02.665064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.047 [2024-12-08 05:17:02.665078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.047 [2024-12-08 05:17:02.669414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.047 [2024-12-08 05:17:02.669454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.047 [2024-12-08 05:17:02.669468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.047 [2024-12-08 05:17:02.673792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.047 [2024-12-08 05:17:02.673831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.047 [2024-12-08 05:17:02.673845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.047 [2024-12-08 05:17:02.678195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.047 [2024-12-08 05:17:02.678235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.047 [2024-12-08 05:17:02.678249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.047 [2024-12-08 05:17:02.682596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.047 [2024-12-08 05:17:02.682637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.047 [2024-12-08 05:17:02.682651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.047 [2024-12-08 05:17:02.686950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.047 [2024-12-08 05:17:02.686990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.047 [2024-12-08 05:17:02.687004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.047 [2024-12-08 05:17:02.691297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.047 [2024-12-08 05:17:02.691470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.047 [2024-12-08 05:17:02.691489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.047 [2024-12-08 05:17:02.695888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.047 [2024-12-08 05:17:02.695930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.047 [2024-12-08 05:17:02.695944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.048 [2024-12-08 05:17:02.700258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.048 [2024-12-08 05:17:02.700299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.048 [2024-12-08 05:17:02.700314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.048 [2024-12-08 05:17:02.704714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.048 [2024-12-08 05:17:02.704754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.048 [2024-12-08 05:17:02.704768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.048 [2024-12-08 05:17:02.709130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.048 [2024-12-08 05:17:02.709170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.048 [2024-12-08 05:17:02.709185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.048 [2024-12-08 05:17:02.713449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.048 [2024-12-08 05:17:02.713489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.048 [2024-12-08 05:17:02.713503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.048 [2024-12-08 05:17:02.717845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.048 [2024-12-08 05:17:02.717886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.048 [2024-12-08 05:17:02.717900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.048 [2024-12-08 05:17:02.722190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.048 [2024-12-08 05:17:02.722231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.048 [2024-12-08 05:17:02.722245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.048 [2024-12-08 05:17:02.726560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 
00:18:13.048 [2024-12-08 05:17:02.726600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.048 [2024-12-08 05:17:02.726615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.048 [2024-12-08 05:17:02.730993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.048 [2024-12-08 05:17:02.731050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.048 [2024-12-08 05:17:02.731065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.048 [2024-12-08 05:17:02.735398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.048 [2024-12-08 05:17:02.735439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.048 [2024-12-08 05:17:02.735453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.048 [2024-12-08 05:17:02.739824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.048 [2024-12-08 05:17:02.739862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.048 [2024-12-08 05:17:02.739877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.048 [2024-12-08 05:17:02.744226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.048 [2024-12-08 05:17:02.744267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.048 [2024-12-08 05:17:02.744281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.048 [2024-12-08 05:17:02.748598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.048 [2024-12-08 05:17:02.748639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.048 [2024-12-08 05:17:02.748653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.048 [2024-12-08 05:17:02.753049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.048 [2024-12-08 05:17:02.753210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.048 [2024-12-08 05:17:02.753228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.048 [2024-12-08 05:17:02.757595] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.048 [2024-12-08 05:17:02.757637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.048 [2024-12-08 05:17:02.757652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.048 [2024-12-08 05:17:02.762020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.048 [2024-12-08 05:17:02.762060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.048 [2024-12-08 05:17:02.762074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.048 [2024-12-08 05:17:02.766345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.048 [2024-12-08 05:17:02.766388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.048 [2024-12-08 05:17:02.766402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.048 [2024-12-08 05:17:02.770748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.048 [2024-12-08 05:17:02.770788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.048 [2024-12-08 05:17:02.770802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.048 [2024-12-08 05:17:02.775131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.048 [2024-12-08 05:17:02.775171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.048 [2024-12-08 05:17:02.775184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.048 [2024-12-08 05:17:02.779617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.048 [2024-12-08 05:17:02.779658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.048 [2024-12-08 05:17:02.779688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.048 [2024-12-08 05:17:02.784091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.048 [2024-12-08 05:17:02.784260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.048 [2024-12-08 05:17:02.784279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:18:13.048 [2024-12-08 05:17:02.788942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.048 [2024-12-08 05:17:02.789136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.048 [2024-12-08 05:17:02.789318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.048 [2024-12-08 05:17:02.793905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.048 [2024-12-08 05:17:02.794098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.049 [2024-12-08 05:17:02.794277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.049 [2024-12-08 05:17:02.798973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.049 [2024-12-08 05:17:02.799156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.049 [2024-12-08 05:17:02.799291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.049 [2024-12-08 05:17:02.803819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.049 [2024-12-08 05:17:02.804002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.049 [2024-12-08 05:17:02.804170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.049 [2024-12-08 05:17:02.808650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.049 [2024-12-08 05:17:02.808845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.049 [2024-12-08 05:17:02.809102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.049 [2024-12-08 05:17:02.813763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.049 [2024-12-08 05:17:02.813939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.049 [2024-12-08 05:17:02.814099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.049 [2024-12-08 05:17:02.818756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.049 [2024-12-08 05:17:02.818939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.049 [2024-12-08 05:17:02.819070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.049 [2024-12-08 05:17:02.823631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.049 [2024-12-08 05:17:02.823830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.049 [2024-12-08 05:17:02.823965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.828505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.309 [2024-12-08 05:17:02.828657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.309 [2024-12-08 05:17:02.828688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.833262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.309 [2024-12-08 05:17:02.833438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.309 [2024-12-08 05:17:02.833590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.838201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.309 [2024-12-08 05:17:02.838378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.309 [2024-12-08 05:17:02.838562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.843174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.309 [2024-12-08 05:17:02.843352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.309 [2024-12-08 05:17:02.843555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.848129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.309 [2024-12-08 05:17:02.848309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.309 [2024-12-08 05:17:02.848460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.853420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.309 [2024-12-08 05:17:02.853623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.309 [2024-12-08 05:17:02.853647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.858316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.309 [2024-12-08 05:17:02.858494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.309 [2024-12-08 05:17:02.858647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.863296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.309 [2024-12-08 05:17:02.863478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.309 [2024-12-08 05:17:02.863686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.868164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.309 [2024-12-08 05:17:02.868313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.309 [2024-12-08 05:17:02.868331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.872729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.309 [2024-12-08 05:17:02.872768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.309 [2024-12-08 05:17:02.872782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.877087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.309 [2024-12-08 05:17:02.877128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.309 [2024-12-08 05:17:02.877143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.881527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.309 [2024-12-08 05:17:02.881568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.309 [2024-12-08 05:17:02.881583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.885942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.309 [2024-12-08 05:17:02.885982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
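The repeated *ERROR* lines in this stretch of the log come from the NVMe/TCP initiator recomputing the CRC32C data digest (DDGST) over received C2H data and finding a mismatch; each mismatch is then surfaced as the READ command print plus the TRANSIENT TRANSPORT ERROR completion that follows it. Below is a minimal, self-contained sketch of that kind of digest check. It is not SPDK's implementation; the payload buffer and received_ddgst value are made-up placeholders, and only the CRC32C (Castagnoli) constants are standard.

    /* Sketch: recompute a CRC32C data digest over received payload and
     * compare it with the digest carried in the PDU. A mismatch is the
     * condition reported above as "data digest error". */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;                 /* standard init value */
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int k = 0; k < 8; k++) {
                /* reflected CRC32C polynomial 0x82F63B78 */
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
            }
        }
        return crc ^ 0xFFFFFFFFu;                   /* final XOR */
    }

    int main(void)
    {
        uint8_t payload[512] = { 0 };               /* placeholder C2H data */
        uint32_t received_ddgst = 0xDEADBEEFu;      /* placeholder digest from the PDU */

        uint32_t computed = crc32c(payload, sizeof(payload));
        if (computed != received_ddgst) {
            printf("data digest error: computed 0x%08x, received 0x%08x\n",
                   computed, received_ddgst);
        }
        return 0;
    }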
00:18:13.309 [2024-12-08 05:17:02.885997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.890389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.309 [2024-12-08 05:17:02.890429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.309 [2024-12-08 05:17:02.890443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.894772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.309 [2024-12-08 05:17:02.894811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.309 [2024-12-08 05:17:02.894825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.899087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.309 [2024-12-08 05:17:02.899127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.309 [2024-12-08 05:17:02.899141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.903394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.309 [2024-12-08 05:17:02.903438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.309 [2024-12-08 05:17:02.903453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.907756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.309 [2024-12-08 05:17:02.907795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.309 [2024-12-08 05:17:02.907808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.912141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.309 [2024-12-08 05:17:02.912180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.309 [2024-12-08 05:17:02.912195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.916490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.309 [2024-12-08 05:17:02.916530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.309 [2024-12-08 05:17:02.916544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.920920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.309 [2024-12-08 05:17:02.920959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.309 [2024-12-08 05:17:02.920973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.925309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.309 [2024-12-08 05:17:02.925469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.309 [2024-12-08 05:17:02.925488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.309 [2024-12-08 05:17:02.929857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:02.929897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:02.929912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:02.934341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:02.934384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:02.934398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:02.938802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:02.938859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:02.938874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:02.943271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:02.943315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:02.943330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:02.947634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:02.947688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:02.947704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:02.952038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:02.952196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:02.952215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:02.956562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:02.956604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:02.956618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:02.961017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:02.961056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:02.961072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:02.965366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:02.965408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:02.965422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:02.969726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:02.969765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:02.969779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:02.974152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:02.974196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:02.974211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:02.978549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 
00:18:13.310 [2024-12-08 05:17:02.978589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:02.978603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:02.982959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:02.982999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:02.983013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:02.987416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:02.987458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:02.987473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:02.991806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:02.991846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:02.991860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:02.996385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:02.996435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:02.996450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:03.000875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:03.000916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:03.000931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:03.005252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:03.005295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:03.005309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:03.010509] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:03.010710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:03.010731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:03.015579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:03.015627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:03.015642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:03.020048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:03.020092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:03.020107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:03.024541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:03.024584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:03.024599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:03.028976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:03.029022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:03.029037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:03.033451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:03.033492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:03.033507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:03.037929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:03.037970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:03.037984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:03.042348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:03.042392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:03.042406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:03.046661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:03.046708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:03.046722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:03.051007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:03.051046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:03.051061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:03.055310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:03.055349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:03.055363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:03.059790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:03.059829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:03.059844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:03.064155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:03.064196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:03.064211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:03.068533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:03.068574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:03.068588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:03.072922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:03.072962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:03.072978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:03.077342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:03.077382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:03.077396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:03.081848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:03.081888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:03.081901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:03.086291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:03.086331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:03.086345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.310 [2024-12-08 05:17:03.090810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.310 [2024-12-08 05:17:03.090849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.310 [2024-12-08 05:17:03.090863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.095278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.095324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.095338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.099638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.099694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.099710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.104050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.104214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.104233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.108638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.108693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.108709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.112997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.113037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.113052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.117423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.117464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.117479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.121755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.121793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.121807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.126205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.126245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.126259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.130576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.130616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:13.570 [2024-12-08 05:17:03.130630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.134968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.135008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.135022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.139327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.139368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.139391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.143772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.143811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.143825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.148217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.148260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.148275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.152637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.152691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.152708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.157001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.157042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.157056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.161410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.161453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.161468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.165802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.165842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.165856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.170159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.170199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.170213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.174773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.174810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.174824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.179275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.179315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.179329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.183862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.183902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.183915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.188222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.188262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.188277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.192643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.192696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.192712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.197025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.197198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.570 [2024-12-08 05:17:03.197217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.570 [2024-12-08 05:17:03.201967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.570 [2024-12-08 05:17:03.202143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.571 [2024-12-08 05:17:03.202339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.571 [2024-12-08 05:17:03.207149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.571 [2024-12-08 05:17:03.207327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.571 [2024-12-08 05:17:03.207551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.571 [2024-12-08 05:17:03.212434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.571 [2024-12-08 05:17:03.212611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.571 [2024-12-08 05:17:03.212763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.571 [2024-12-08 05:17:03.217754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.571 [2024-12-08 05:17:03.217928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.571 [2024-12-08 05:17:03.218057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.571 [2024-12-08 05:17:03.222971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.571 [2024-12-08 05:17:03.223143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.571 [2024-12-08 05:17:03.223268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.571 [2024-12-08 05:17:03.228160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 
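Each completion in this run is printed as "(00/22) ... p:0 m:0 dnr:0", i.e. status code type 0x00 with status code 0x22 (Command Transient Transport Error) and the Do Not Retry bit clear, so the failed READ may be retried. The sketch below unpacks those fields from a 16-bit completion status word, assuming the standard NVMe layout (phase tag in bit 0, SC in bits 8:1, SCT in bits 11:9, More in bit 14, DNR in bit 15); the example status value is hand-built for illustration, not taken from the log.

    /* Sketch: decode the completion status fields summarized above as
     * "(SCT/SC) ... p m dnr". */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* SC=0x22 (Command Transient Transport Error), SCT=0, P/M/DNR=0 */
        uint16_t status = (uint16_t)(0x22 << 1);

        unsigned p   =  status        & 0x1;        /* phase tag */
        unsigned sc  = (status >> 1)  & 0xFF;       /* status code */
        unsigned sct = (status >> 9)  & 0x7;        /* status code type */
        unsigned m   = (status >> 14) & 0x1;        /* more */
        unsigned dnr = (status >> 15) & 0x1;        /* do not retry */

        /* Prints sct=0x00 sc=0x22 p=0 m=0 dnr=0, matching the "(00/22)" lines. */
        printf("sct=0x%02x sc=0x%02x p=%u m=%u dnr=%u\n", sct, sc, p, m, dnr);
        return 0;
    }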
00:18:13.571 [2024-12-08 05:17:03.228334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.571 [2024-12-08 05:17:03.228460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.571 [2024-12-08 05:17:03.233133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.571 [2024-12-08 05:17:03.233308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.571 [2024-12-08 05:17:03.233440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.571 [2024-12-08 05:17:03.237988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.571 [2024-12-08 05:17:03.238162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.571 [2024-12-08 05:17:03.238388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.571 [2024-12-08 05:17:03.242940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.571 [2024-12-08 05:17:03.243117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.571 [2024-12-08 05:17:03.243239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.571 [2024-12-08 05:17:03.247726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.571 [2024-12-08 05:17:03.247901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.571 [2024-12-08 05:17:03.248005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.571 [2024-12-08 05:17:03.252578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.571 [2024-12-08 05:17:03.252621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.571 [2024-12-08 05:17:03.252636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.571 [2024-12-08 05:17:03.256945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.571 [2024-12-08 05:17:03.256986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.571 [2024-12-08 05:17:03.257000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.571 [2024-12-08 05:17:03.261394] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.571 [2024-12-08 05:17:03.261434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.571 [2024-12-08 05:17:03.261449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.571 [2024-12-08 05:17:03.265796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.571 [2024-12-08 05:17:03.265835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.571 [2024-12-08 05:17:03.265849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.571 [2024-12-08 05:17:03.270136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.571 [2024-12-08 05:17:03.270177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.571 [2024-12-08 05:17:03.270191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.571 [2024-12-08 05:17:03.274513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.571 [2024-12-08 05:17:03.274553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.571 [2024-12-08 05:17:03.274568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.571 [2024-12-08 05:17:03.278883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.571 [2024-12-08 05:17:03.278922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.571 [2024-12-08 05:17:03.278936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.571 [2024-12-08 05:17:03.283272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.571 [2024-12-08 05:17:03.283311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.571 [2024-12-08 05:17:03.283326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.571 [2024-12-08 05:17:03.287670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.571 [2024-12-08 05:17:03.287721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.571 [2024-12-08 05:17:03.287735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:18:13.571 [2024-12-08 05:17:03.292307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.571 [2024-12-08 05:17:03.292347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.571 [2024-12-08 05:17:03.292361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.571 [2024-12-08 05:17:03.296854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.571 [2024-12-08 05:17:03.296894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.571 [2024-12-08 05:17:03.296908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.571 [2024-12-08 05:17:03.301297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.572 [2024-12-08 05:17:03.301338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.572 [2024-12-08 05:17:03.301352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.572 [2024-12-08 05:17:03.306031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.572 [2024-12-08 05:17:03.306072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.572 [2024-12-08 05:17:03.306087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.572 [2024-12-08 05:17:03.310450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.572 [2024-12-08 05:17:03.310490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.572 [2024-12-08 05:17:03.310505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.572 [2024-12-08 05:17:03.314855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.572 [2024-12-08 05:17:03.314898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.572 [2024-12-08 05:17:03.314913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.572 [2024-12-08 05:17:03.319226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.572 [2024-12-08 05:17:03.319266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.572 [2024-12-08 05:17:03.319281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.572 [2024-12-08 05:17:03.323745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.572 [2024-12-08 05:17:03.323784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.572 [2024-12-08 05:17:03.323799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.572 [2024-12-08 05:17:03.328157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.572 [2024-12-08 05:17:03.328197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.572 [2024-12-08 05:17:03.328211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.572 [2024-12-08 05:17:03.332534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.572 [2024-12-08 05:17:03.332575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.572 [2024-12-08 05:17:03.332589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.572 [2024-12-08 05:17:03.336865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.572 [2024-12-08 05:17:03.336905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.572 [2024-12-08 05:17:03.336919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.572 [2024-12-08 05:17:03.341217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.572 [2024-12-08 05:17:03.341257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.572 [2024-12-08 05:17:03.341272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.572 [2024-12-08 05:17:03.345584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.572 [2024-12-08 05:17:03.345624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.572 [2024-12-08 05:17:03.345638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.572 [2024-12-08 05:17:03.349934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.572 [2024-12-08 05:17:03.349974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.572 [2024-12-08 05:17:03.349988] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.832 [2024-12-08 05:17:03.354403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.832 [2024-12-08 05:17:03.354444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.832 [2024-12-08 05:17:03.354458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.832 [2024-12-08 05:17:03.358877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.832 [2024-12-08 05:17:03.358928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.832 [2024-12-08 05:17:03.358943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.832 [2024-12-08 05:17:03.363255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.832 [2024-12-08 05:17:03.363298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.832 [2024-12-08 05:17:03.363313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.832 [2024-12-08 05:17:03.367895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.832 [2024-12-08 05:17:03.367936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.832 [2024-12-08 05:17:03.367951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.832 [2024-12-08 05:17:03.372310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.832 [2024-12-08 05:17:03.372350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.832 [2024-12-08 05:17:03.372364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.832 [2024-12-08 05:17:03.376733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.832 [2024-12-08 05:17:03.376772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.832 [2024-12-08 05:17:03.376787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.832 [2024-12-08 05:17:03.381030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.832 [2024-12-08 05:17:03.381069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:13.832 [2024-12-08 05:17:03.381084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.832 [2024-12-08 05:17:03.385305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.832 [2024-12-08 05:17:03.385345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.385359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.389700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.389739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.389753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.394040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.394079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.394093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.398342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.398383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.398398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.402652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.402703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.402717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.406991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.407032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.407047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.411364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.411412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.411426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.415738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.415776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.415791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.420043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.420082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.420095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.424417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.424457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.424471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.428806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.428846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.428861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.433203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.433244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.433258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.437588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.437628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.437643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.442012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.442051] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.442065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.446424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.446464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.446478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.450829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.450868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.450882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.455137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.455177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.455191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.459577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.459619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.459634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.463971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.464011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.464025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.468420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.468580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.468597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.472979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 
00:18:13.833 [2024-12-08 05:17:03.473019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.473034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.477362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.477402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.477417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.481904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.481943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.481956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.486228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.486269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.486283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.490664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.490715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.490730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.494987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.495027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.495042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.499354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.833 [2024-12-08 05:17:03.499402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.833 [2024-12-08 05:17:03.499416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.833 [2024-12-08 05:17:03.503757] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.503797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.503811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.508122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.508162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.508175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.512503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.512544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.512558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.516898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.516938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.516952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.521320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.521361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.521375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.525748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.525787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.525802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.530155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.530196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.530216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.534509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.534549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.534564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.538919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.538959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.538973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.543154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.543198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.543214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.547634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.547690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.547706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.552123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.552164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.552179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.556600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.556640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.556655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.561026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.561067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.561081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.565459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.565502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.565517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.569938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.569979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.569993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.574343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.574383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.574398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.578749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.578789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.578803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.583126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.583168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.583182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.587578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.587618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.587632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.592031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.592071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.592086] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.596979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.597035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.597056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.602073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.602123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.602139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.606617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.606696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.606713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.834 [2024-12-08 05:17:03.611132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:13.834 [2024-12-08 05:17:03.611177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.834 [2024-12-08 05:17:03.611191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.093 [2024-12-08 05:17:03.615668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:14.093 [2024-12-08 05:17:03.615723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.093 [2024-12-08 05:17:03.615738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.093 [2024-12-08 05:17:03.620286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:14.093 [2024-12-08 05:17:03.620355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.093 [2024-12-08 05:17:03.620371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.093 [2024-12-08 05:17:03.624819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:14.093 [2024-12-08 05:17:03.624886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:14.093 [2024-12-08 05:17:03.624902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.093 [2024-12-08 05:17:03.629205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:14.093 [2024-12-08 05:17:03.629270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.093 [2024-12-08 05:17:03.629285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.093 [2024-12-08 05:17:03.633763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:14.093 [2024-12-08 05:17:03.633824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.093 [2024-12-08 05:17:03.633839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.093 [2024-12-08 05:17:03.638415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:14.093 [2024-12-08 05:17:03.638483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.093 [2024-12-08 05:17:03.638499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.094 [2024-12-08 05:17:03.643066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:14.094 [2024-12-08 05:17:03.643131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.094 [2024-12-08 05:17:03.643148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.094 [2024-12-08 05:17:03.647565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:14.094 [2024-12-08 05:17:03.647630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.094 [2024-12-08 05:17:03.647645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.094 [2024-12-08 05:17:03.651977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1297680) 00:18:14.094 [2024-12-08 05:17:03.652203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.094 [2024-12-08 05:17:03.652224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.094 00:18:14.094 Latency(us) 00:18:14.094 [2024-12-08T05:17:03.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.094 [2024-12-08T05:17:03.880Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 
00:18:14.094 nvme0n1 : 2.00 6806.29 850.79 0.00 0.00 2347.63 1995.87 6642.97 00:18:14.094 [2024-12-08T05:17:03.880Z] =================================================================================================================== 00:18:14.094 [2024-12-08T05:17:03.880Z] Total : 6806.29 850.79 0.00 0.00 2347.63 1995.87 6642.97 00:18:14.094 0 00:18:14.094 05:17:03 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:14.094 05:17:03 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:14.094 | .driver_specific 00:18:14.094 | .nvme_error 00:18:14.094 | .status_code 00:18:14.094 | .command_transient_transport_error' 00:18:14.094 05:17:03 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:14.094 05:17:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:14.352 05:17:03 -- host/digest.sh@71 -- # (( 439 > 0 )) 00:18:14.352 05:17:03 -- host/digest.sh@73 -- # killprocess 84123 00:18:14.352 05:17:03 -- common/autotest_common.sh@936 -- # '[' -z 84123 ']' 00:18:14.352 05:17:03 -- common/autotest_common.sh@940 -- # kill -0 84123 00:18:14.352 05:17:03 -- common/autotest_common.sh@941 -- # uname 00:18:14.352 05:17:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:14.352 05:17:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84123 00:18:14.352 killing process with pid 84123 00:18:14.352 Received shutdown signal, test time was about 2.000000 seconds 00:18:14.352 00:18:14.352 Latency(us) 00:18:14.352 [2024-12-08T05:17:04.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.352 [2024-12-08T05:17:04.138Z] =================================================================================================================== 00:18:14.352 [2024-12-08T05:17:04.138Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:14.352 05:17:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:14.352 05:17:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:14.352 05:17:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84123' 00:18:14.352 05:17:04 -- common/autotest_common.sh@955 -- # kill 84123 00:18:14.352 05:17:04 -- common/autotest_common.sh@960 -- # wait 84123 00:18:14.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:14.611 05:17:04 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:18:14.611 05:17:04 -- host/digest.sh@54 -- # local rw bs qd 00:18:14.611 05:17:04 -- host/digest.sh@56 -- # rw=randwrite 00:18:14.611 05:17:04 -- host/digest.sh@56 -- # bs=4096 00:18:14.611 05:17:04 -- host/digest.sh@56 -- # qd=128 00:18:14.611 05:17:04 -- host/digest.sh@58 -- # bperfpid=84190 00:18:14.611 05:17:04 -- host/digest.sh@60 -- # waitforlisten 84190 /var/tmp/bperf.sock 00:18:14.611 05:17:04 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:14.611 05:17:04 -- common/autotest_common.sh@829 -- # '[' -z 84190 ']' 00:18:14.611 05:17:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:14.611 05:17:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.611 05:17:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
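The xtrace above is the pass/fail check for the read run that just finished: host/digest.sh pulls the bdev I/O statistics from the bperf RPC socket and requires the transient-transport-error counter to be non-zero (439 in this run). Below is a minimal sketch of that check, reconstructed from the commands visible in the trace rather than copied from the real script; the socket path, bdev name, and jq filter are exactly as shown above, and the counter exists because the controller is created after "bdev_nvme_set_options --nvme-error-stat".

#!/usr/bin/env bash
# Sketch of the transient-error check from host/digest.sh; paths, socket,
# bdev name and the jq filter are taken from the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

get_transient_errcount() {
    local bdev=$1
    "$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# The run passes only if the injected digest corruption produced at least one
# TRANSIENT TRANSPORT ERROR completion on the target bdev.
(( $(get_transient_errcount nvme0n1) > 0 ))

After this check succeeds, the harness kills the bdevperf instance for the randread pass (pid 84123) and immediately starts a new one (pid 84190) for the randwrite variant with 4096-byte I/O at queue depth 128, as the trace below shows.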
00:18:14.611 05:17:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.611 05:17:04 -- common/autotest_common.sh@10 -- # set +x 00:18:14.611 [2024-12-08 05:17:04.219359] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:14.611 [2024-12-08 05:17:04.220331] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84190 ] 00:18:14.611 [2024-12-08 05:17:04.368324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.868 [2024-12-08 05:17:04.409339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.868 05:17:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:14.868 05:17:04 -- common/autotest_common.sh@862 -- # return 0 00:18:14.868 05:17:04 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:14.868 05:17:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:15.126 05:17:04 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:15.126 05:17:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.126 05:17:04 -- common/autotest_common.sh@10 -- # set +x 00:18:15.126 05:17:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.126 05:17:04 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:15.126 05:17:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:15.384 nvme0n1 00:18:15.384 05:17:05 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:15.384 05:17:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.384 05:17:05 -- common/autotest_common.sh@10 -- # set +x 00:18:15.384 05:17:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.384 05:17:05 -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:15.384 05:17:05 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:15.641 Running I/O for 2 seconds... 
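Before "Running I/O for 2 seconds..." the xtrace shows the setup for the randwrite pass: NVMe error counters and unlimited bdev retries are enabled on the bperf instance, any previous crc32c error injection is cleared, a TCP controller is attached with data digest (--ddgst), and crc32c corruption is injected before perform_tests starts the workload. A condensed sketch of that sequence follows, using only the RPCs that appear in the trace; it is a reconstruction, not the verbatim host/digest.sh script, and the accel_error_inject_error calls go through rpc_cmd in the trace (the nvmf target application), whose socket is not shown, so the default rpc.py socket is assumed for them here.

#!/usr/bin/env bash
# Setup for the randwrite digest-error pass, reconstructed from the xtrace
# above. bperf_rpc/bperf_py calls target the dedicated bdevperf socket; the
# accel_error_inject_error calls go to the nvmf target app (default RPC
# socket assumed, since the trace does not show it).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

# Track NVMe errors per status code and retry failed I/O indefinitely so the
# workload survives the injected digest errors.
"$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any previous injection on the target, then attach a TCP controller in
# bdevperf with data digest (--ddgst) so received payloads are CRC32C-checked.
"$rpc" accel_error_inject_error -o crc32c -t disable
"$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Inject crc32c corruption (same -t corrupt -i 256 arguments as in the trace)
# so the host reports data digest errors and TRANSIENT TRANSPORT ERROR
# completions, as seen in the log below.
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256

# Start the 2-second bdevperf workload on the bperf socket.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests

Every WRITE completion printed after this point carries the same TRANSIENT TRANSPORT ERROR status, which is what the later get_transient_errcount check counts for this pass.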
00:18:15.641 [2024-12-08 05:17:05.268887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190ddc00 00:18:15.641 [2024-12-08 05:17:05.270280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.641 [2024-12-08 05:17:05.270328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.641 [2024-12-08 05:17:05.285229] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190fef90 00:18:15.641 [2024-12-08 05:17:05.286592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.641 [2024-12-08 05:17:05.286633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.641 [2024-12-08 05:17:05.301532] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190ff3c8 00:18:15.641 [2024-12-08 05:17:05.302918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.641 [2024-12-08 05:17:05.302956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:15.641 [2024-12-08 05:17:05.317905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190feb58 00:18:15.641 [2024-12-08 05:17:05.319252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.641 [2024-12-08 05:17:05.319290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:15.641 [2024-12-08 05:17:05.334314] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190fe720 00:18:15.641 [2024-12-08 05:17:05.335661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.641 [2024-12-08 05:17:05.335711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:15.641 [2024-12-08 05:17:05.350634] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190fe2e8 00:18:15.641 [2024-12-08 05:17:05.351982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.641 [2024-12-08 05:17:05.352021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:15.641 [2024-12-08 05:17:05.366918] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190fdeb0 00:18:15.641 [2024-12-08 05:17:05.368252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.641 [2024-12-08 05:17:05.368290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 
m:0 dnr:0 00:18:15.641 [2024-12-08 05:17:05.383171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190fda78 00:18:15.641 [2024-12-08 05:17:05.384490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.641 [2024-12-08 05:17:05.384534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:15.641 [2024-12-08 05:17:05.399411] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190fd640 00:18:15.641 [2024-12-08 05:17:05.400727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.641 [2024-12-08 05:17:05.400765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:15.641 [2024-12-08 05:17:05.416421] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190fd208 00:18:15.641 [2024-12-08 05:17:05.417906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.641 [2024-12-08 05:17:05.417940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:15.898 [2024-12-08 05:17:05.433180] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190fcdd0 00:18:15.898 [2024-12-08 05:17:05.434491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.898 [2024-12-08 05:17:05.434656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:15.898 [2024-12-08 05:17:05.449664] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190fc998 00:18:15.898 [2024-12-08 05:17:05.450940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.898 [2024-12-08 05:17:05.450979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:15.898 [2024-12-08 05:17:05.466228] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190fc560 00:18:15.898 [2024-12-08 05:17:05.467512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.898 [2024-12-08 05:17:05.467552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:15.898 [2024-12-08 05:17:05.482613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190fc128 00:18:15.898 [2024-12-08 05:17:05.483888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.898 [2024-12-08 05:17:05.483927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:15.898 [2024-12-08 05:17:05.498963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190fbcf0 00:18:15.898 [2024-12-08 05:17:05.500204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.898 [2024-12-08 05:17:05.500243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:15.898 [2024-12-08 05:17:05.515827] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190fb8b8 00:18:15.898 [2024-12-08 05:17:05.517073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.898 [2024-12-08 05:17:05.517233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:15.898 [2024-12-08 05:17:05.532188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190fb480 00:18:15.898 [2024-12-08 05:17:05.533407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.898 [2024-12-08 05:17:05.533447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:15.898 [2024-12-08 05:17:05.548560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190fb048 00:18:15.898 [2024-12-08 05:17:05.549778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.899 [2024-12-08 05:17:05.549816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:15.899 [2024-12-08 05:17:05.564845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190fac10 00:18:15.899 [2024-12-08 05:17:05.566036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.899 [2024-12-08 05:17:05.566073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:15.899 [2024-12-08 05:17:05.581183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190fa7d8 00:18:15.899 [2024-12-08 05:17:05.582364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.899 [2024-12-08 05:17:05.582402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:15.899 [2024-12-08 05:17:05.597560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190fa3a0 00:18:15.899 [2024-12-08 05:17:05.598750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.899 [2024-12-08 05:17:05.598788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:15.899 [2024-12-08 05:17:05.613903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f9f68 00:18:15.899 [2024-12-08 05:17:05.615218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.899 [2024-12-08 05:17:05.615250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:15.899 [2024-12-08 05:17:05.630577] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f9b30 00:18:15.899 [2024-12-08 05:17:05.632135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.899 [2024-12-08 05:17:05.632298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:15.899 [2024-12-08 05:17:05.647949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f96f8 00:18:15.899 [2024-12-08 05:17:05.649269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.899 [2024-12-08 05:17:05.649451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:15.899 [2024-12-08 05:17:05.664450] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f92c0 00:18:15.899 [2024-12-08 05:17:05.665748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.899 [2024-12-08 05:17:05.665923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:15.899 [2024-12-08 05:17:05.680926] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f8e88 00:18:15.899 [2024-12-08 05:17:05.682196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.899 [2024-12-08 05:17:05.682373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:16.156 [2024-12-08 05:17:05.697586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f8a50 00:18:16.156 [2024-12-08 05:17:05.698872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.156 [2024-12-08 05:17:05.699045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:16.156 [2024-12-08 05:17:05.714092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f8618 00:18:16.156 [2024-12-08 05:17:05.715356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.156 [2024-12-08 05:17:05.715542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:16.156 [2024-12-08 05:17:05.730670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f81e0 00:18:16.156 [2024-12-08 05:17:05.731938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.156 [2024-12-08 05:17:05.732120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:16.156 [2024-12-08 05:17:05.748298] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f7da8 00:18:16.156 [2024-12-08 05:17:05.749566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.156 [2024-12-08 05:17:05.749763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:16.156 [2024-12-08 05:17:05.765339] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f7970 00:18:16.156 [2024-12-08 05:17:05.766612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.156 [2024-12-08 05:17:05.766803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:16.156 [2024-12-08 05:17:05.782799] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f7538 00:18:16.156 [2024-12-08 05:17:05.784035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.156 [2024-12-08 05:17:05.784189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.156 [2024-12-08 05:17:05.799363] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f7100 00:18:16.156 [2024-12-08 05:17:05.800425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.156 [2024-12-08 05:17:05.800465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:16.156 [2024-12-08 05:17:05.815848] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f6cc8 00:18:16.156 [2024-12-08 05:17:05.816914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.156 [2024-12-08 05:17:05.816957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:16.156 [2024-12-08 05:17:05.832284] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f6890 00:18:16.156 [2024-12-08 05:17:05.833328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.156 [2024-12-08 05:17:05.833366] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:16.156 [2024-12-08 05:17:05.848772] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f6458 00:18:16.156 [2024-12-08 05:17:05.849788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.156 [2024-12-08 05:17:05.849949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:16.156 [2024-12-08 05:17:05.865397] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f6020 00:18:16.156 [2024-12-08 05:17:05.866413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.156 [2024-12-08 05:17:05.866456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:16.156 [2024-12-08 05:17:05.882719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f5be8 00:18:16.156 [2024-12-08 05:17:05.883997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.156 [2024-12-08 05:17:05.884037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:16.156 [2024-12-08 05:17:05.899706] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f57b0 00:18:16.156 [2024-12-08 05:17:05.900706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.156 [2024-12-08 05:17:05.900865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:16.156 [2024-12-08 05:17:05.916181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f5378 00:18:16.156 [2024-12-08 05:17:05.917163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.156 [2024-12-08 05:17:05.917202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:16.156 [2024-12-08 05:17:05.932936] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f4f40 00:18:16.156 [2024-12-08 05:17:05.933916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.156 [2024-12-08 05:17:05.933953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:16.414 [2024-12-08 05:17:05.949309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f4b08 00:18:16.414 [2024-12-08 05:17:05.950377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.414 [2024-12-08 05:17:05.950413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:16.414 [2024-12-08 05:17:05.965927] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f46d0 00:18:16.414 [2024-12-08 05:17:05.967031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.414 [2024-12-08 05:17:05.967071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:16.414 [2024-12-08 05:17:05.982421] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f4298 00:18:16.414 [2024-12-08 05:17:05.983365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.414 [2024-12-08 05:17:05.983420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:16.414 [2024-12-08 05:17:05.998837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f3e60 00:18:16.414 [2024-12-08 05:17:05.999772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.414 [2024-12-08 05:17:05.999930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:16.414 [2024-12-08 05:17:06.016006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f3a28 00:18:16.414 [2024-12-08 05:17:06.016947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.414 [2024-12-08 05:17:06.016996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:16.414 [2024-12-08 05:17:06.033110] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f35f0 00:18:16.414 [2024-12-08 05:17:06.034178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.414 [2024-12-08 05:17:06.034215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:16.414 [2024-12-08 05:17:06.049864] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f31b8 00:18:16.414 [2024-12-08 05:17:06.050770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.414 [2024-12-08 05:17:06.050930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:16.414 [2024-12-08 05:17:06.067267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f2d80 00:18:16.414 [2024-12-08 05:17:06.068478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.414 [2024-12-08 
05:17:06.068536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:16.414 [2024-12-08 05:17:06.084942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f2948 00:18:16.414 [2024-12-08 05:17:06.085840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.414 [2024-12-08 05:17:06.086001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:16.414 [2024-12-08 05:17:06.101503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f2510 00:18:16.414 [2024-12-08 05:17:06.102379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.414 [2024-12-08 05:17:06.102424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:16.414 [2024-12-08 05:17:06.117965] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f20d8 00:18:16.414 [2024-12-08 05:17:06.118831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.414 [2024-12-08 05:17:06.118870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:16.414 [2024-12-08 05:17:06.134296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f1ca0 00:18:16.414 [2024-12-08 05:17:06.135154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.414 [2024-12-08 05:17:06.135194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:16.414 [2024-12-08 05:17:06.150647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f1868 00:18:16.414 [2024-12-08 05:17:06.151493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.414 [2024-12-08 05:17:06.151532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:16.414 [2024-12-08 05:17:06.167958] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f1430 00:18:16.414 [2024-12-08 05:17:06.169119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.414 [2024-12-08 05:17:06.169285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:16.414 [2024-12-08 05:17:06.185259] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f0ff8 00:18:16.414 [2024-12-08 05:17:06.186091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19241 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:16.414 [2024-12-08 05:17:06.186131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:16.671 [2024-12-08 05:17:06.202180] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f0bc0 00:18:16.671 [2024-12-08 05:17:06.203024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.671 [2024-12-08 05:17:06.203065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:16.671 [2024-12-08 05:17:06.220771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f0788 00:18:16.671 [2024-12-08 05:17:06.221764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.671 [2024-12-08 05:17:06.221804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:16.671 [2024-12-08 05:17:06.239770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190f0350 00:18:16.671 [2024-12-08 05:17:06.240742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.671 [2024-12-08 05:17:06.240783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:16.671 [2024-12-08 05:17:06.257065] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190eff18 00:18:16.671 [2024-12-08 05:17:06.257848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.671 [2024-12-08 05:17:06.257887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:16.671 [2024-12-08 05:17:06.273442] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190efae0 00:18:16.671 [2024-12-08 05:17:06.274221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.671 [2024-12-08 05:17:06.274263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:16.671 [2024-12-08 05:17:06.289813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190ef6a8 00:18:16.671 [2024-12-08 05:17:06.290548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.671 [2024-12-08 05:17:06.290588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:16.671 [2024-12-08 05:17:06.306182] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190ef270 00:18:16.671 [2024-12-08 05:17:06.306925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:10588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.671 [2024-12-08 05:17:06.306966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:16.672 [2024-12-08 05:17:06.322493] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190eee38 00:18:16.672 [2024-12-08 05:17:06.323226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.672 [2024-12-08 05:17:06.323265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.672 [2024-12-08 05:17:06.338817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190eea00 00:18:16.672 [2024-12-08 05:17:06.339686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.672 [2024-12-08 05:17:06.339718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:16.672 [2024-12-08 05:17:06.355314] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190ee5c8 00:18:16.672 [2024-12-08 05:17:06.356034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.672 [2024-12-08 05:17:06.356194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:16.672 [2024-12-08 05:17:06.371699] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190ee190 00:18:16.672 [2024-12-08 05:17:06.372386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.672 [2024-12-08 05:17:06.372426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:16.672 [2024-12-08 05:17:06.388039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190edd58 00:18:16.672 [2024-12-08 05:17:06.388726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.672 [2024-12-08 05:17:06.388763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:16.672 [2024-12-08 05:17:06.404368] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190ed920 00:18:16.672 [2024-12-08 05:17:06.405055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.672 [2024-12-08 05:17:06.405094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:16.672 [2024-12-08 05:17:06.420796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190ed4e8 00:18:16.672 [2024-12-08 05:17:06.421452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:23 nsid:1 lba:19482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.672 [2024-12-08 05:17:06.421491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:16.672 [2024-12-08 05:17:06.437171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190ed0b0 00:18:16.672 [2024-12-08 05:17:06.437830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.672 [2024-12-08 05:17:06.437869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:16.672 [2024-12-08 05:17:06.454312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190ecc78 00:18:16.672 [2024-12-08 05:17:06.455381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.672 [2024-12-08 05:17:06.455415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:16.928 [2024-12-08 05:17:06.473844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190ec840 00:18:16.928 [2024-12-08 05:17:06.474625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.928 [2024-12-08 05:17:06.474667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:16.928 [2024-12-08 05:17:06.492990] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190ec408 00:18:16.928 [2024-12-08 05:17:06.493990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.928 [2024-12-08 05:17:06.494141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:16.928 [2024-12-08 05:17:06.512920] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190ebfd0 00:18:16.928 [2024-12-08 05:17:06.513722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.928 [2024-12-08 05:17:06.513764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:16.928 [2024-12-08 05:17:06.532846] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190ebb98 00:18:16.928 [2024-12-08 05:17:06.533619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.928 [2024-12-08 05:17:06.533809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:16.928 [2024-12-08 05:17:06.550083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190eb760 00:18:16.928 [2024-12-08 05:17:06.550690] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.928 [2024-12-08 05:17:06.550727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:16.928 [2024-12-08 05:17:06.566554] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190eb328 00:18:16.928 [2024-12-08 05:17:06.567148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.928 [2024-12-08 05:17:06.567195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:16.928 [2024-12-08 05:17:06.582944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190eaef0 00:18:16.928 [2024-12-08 05:17:06.583528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.928 [2024-12-08 05:17:06.583572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:16.928 [2024-12-08 05:17:06.599288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190eaab8 00:18:16.928 [2024-12-08 05:17:06.599871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.928 [2024-12-08 05:17:06.599922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:16.928 [2024-12-08 05:17:06.615638] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190ea680 00:18:16.928 [2024-12-08 05:17:06.616217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.928 [2024-12-08 05:17:06.616259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:16.928 [2024-12-08 05:17:06.631962] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190ea248 00:18:16.928 [2024-12-08 05:17:06.632500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.928 [2024-12-08 05:17:06.632544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:16.928 [2024-12-08 05:17:06.648300] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e9e10 00:18:16.928 [2024-12-08 05:17:06.648844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.928 [2024-12-08 05:17:06.648890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:16.928 [2024-12-08 05:17:06.664661] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e99d8 00:18:16.928 [2024-12-08 05:17:06.665193] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.928 [2024-12-08 05:17:06.665237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:16.928 [2024-12-08 05:17:06.681193] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e95a0 00:18:16.928 [2024-12-08 05:17:06.681717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.928 [2024-12-08 05:17:06.681759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:16.928 [2024-12-08 05:17:06.697537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e9168 00:18:16.928 [2024-12-08 05:17:06.698049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.928 [2024-12-08 05:17:06.698092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:17.185 [2024-12-08 05:17:06.713912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e8d30 00:18:17.185 [2024-12-08 05:17:06.714399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.185 [2024-12-08 05:17:06.714452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:17.185 [2024-12-08 05:17:06.730248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e88f8 00:18:17.185 [2024-12-08 05:17:06.730733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.185 [2024-12-08 05:17:06.730771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:17.185 [2024-12-08 05:17:06.746540] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e84c0 00:18:17.185 [2024-12-08 05:17:06.747037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.185 [2024-12-08 05:17:06.747077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:17.185 [2024-12-08 05:17:06.762870] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e8088 00:18:17.185 [2024-12-08 05:17:06.763331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.185 [2024-12-08 05:17:06.763382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:17.185 [2024-12-08 05:17:06.779159] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e7c50 00:18:17.185 [2024-12-08 
05:17:06.779619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.185 [2024-12-08 05:17:06.779658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:17.185 [2024-12-08 05:17:06.795474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e7818 00:18:17.185 [2024-12-08 05:17:06.795928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.185 [2024-12-08 05:17:06.795966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:17.185 [2024-12-08 05:17:06.811844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e73e0 00:18:17.185 [2024-12-08 05:17:06.812264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.185 [2024-12-08 05:17:06.812293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:17.185 [2024-12-08 05:17:06.828146] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e6fa8 00:18:17.185 [2024-12-08 05:17:06.828560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.185 [2024-12-08 05:17:06.828589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:17.185 [2024-12-08 05:17:06.844498] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e6b70 00:18:17.185 [2024-12-08 05:17:06.844920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.185 [2024-12-08 05:17:06.844973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:17.185 [2024-12-08 05:17:06.860889] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e6738 00:18:17.185 [2024-12-08 05:17:06.861293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.185 [2024-12-08 05:17:06.861339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.185 [2024-12-08 05:17:06.877224] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e6300 00:18:17.185 [2024-12-08 05:17:06.877616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.185 [2024-12-08 05:17:06.877646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.185 [2024-12-08 05:17:06.893603] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e5ec8 
00:18:17.185 [2024-12-08 05:17:06.893998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.185 [2024-12-08 05:17:06.894034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:17.185 [2024-12-08 05:17:06.909942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e5a90 00:18:17.185 [2024-12-08 05:17:06.910307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.185 [2024-12-08 05:17:06.910337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:17.185 [2024-12-08 05:17:06.926248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e5658 00:18:17.185 [2024-12-08 05:17:06.926602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.185 [2024-12-08 05:17:06.926631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:17.185 [2024-12-08 05:17:06.942558] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e5220 00:18:17.185 [2024-12-08 05:17:06.942915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.185 [2024-12-08 05:17:06.942944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:17.185 [2024-12-08 05:17:06.958881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e4de8 00:18:17.185 [2024-12-08 05:17:06.959211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.185 [2024-12-08 05:17:06.959239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:17.443 [2024-12-08 05:17:06.975223] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e49b0 00:18:17.443 [2024-12-08 05:17:06.975563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.443 [2024-12-08 05:17:06.975592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:17.443 [2024-12-08 05:17:06.991540] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e4578 00:18:17.443 [2024-12-08 05:17:06.991874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.443 [2024-12-08 05:17:06.991903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:17.443 [2024-12-08 05:17:07.007873] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with 
pdu=0x2000190e4140 00:18:17.443 [2024-12-08 05:17:07.008175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.443 [2024-12-08 05:17:07.008204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:17.443 [2024-12-08 05:17:07.024201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e3d08 00:18:17.443 [2024-12-08 05:17:07.024492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.443 [2024-12-08 05:17:07.024522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:17.443 [2024-12-08 05:17:07.040513] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e38d0 00:18:17.443 [2024-12-08 05:17:07.040814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.443 [2024-12-08 05:17:07.040843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:17.443 [2024-12-08 05:17:07.056845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e3498 00:18:17.443 [2024-12-08 05:17:07.057114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.443 [2024-12-08 05:17:07.057144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:17.443 [2024-12-08 05:17:07.073138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e3060 00:18:17.443 [2024-12-08 05:17:07.073401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.443 [2024-12-08 05:17:07.073424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:17.443 [2024-12-08 05:17:07.089459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e2c28 00:18:17.443 [2024-12-08 05:17:07.089724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.443 [2024-12-08 05:17:07.089754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:17.443 [2024-12-08 05:17:07.105896] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e27f0 00:18:17.443 [2024-12-08 05:17:07.106144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.443 [2024-12-08 05:17:07.106175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:17.443 [2024-12-08 05:17:07.122597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x182dd30) with pdu=0x2000190e23b8 00:18:17.443 [2024-12-08 05:17:07.122872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.443 [2024-12-08 05:17:07.122924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:17.443 [2024-12-08 05:17:07.139100] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e1f80 00:18:17.443 [2024-12-08 05:17:07.139328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.443 [2024-12-08 05:17:07.139353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:17.443 [2024-12-08 05:17:07.155541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e1b48 00:18:17.443 [2024-12-08 05:17:07.155772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.443 [2024-12-08 05:17:07.155804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:17.443 [2024-12-08 05:17:07.171967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e1710 00:18:17.443 [2024-12-08 05:17:07.172329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.443 [2024-12-08 05:17:07.172365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:17.443 [2024-12-08 05:17:07.189433] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e12d8 00:18:17.443 [2024-12-08 05:17:07.189635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.443 [2024-12-08 05:17:07.189668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:17.443 [2024-12-08 05:17:07.205882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e0ea0 00:18:17.443 [2024-12-08 05:17:07.206067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.443 [2024-12-08 05:17:07.206092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:17.443 [2024-12-08 05:17:07.222264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e0a68 00:18:17.443 [2024-12-08 05:17:07.222440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.443 [2024-12-08 05:17:07.222464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:17.701 [2024-12-08 05:17:07.238684] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x182dd30) with pdu=0x2000190e0630 00:18:17.701 [2024-12-08 05:17:07.238849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.701 [2024-12-08 05:17:07.238872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:17.701 00:18:17.701 Latency(us) 00:18:17.701 [2024-12-08T05:17:07.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.701 [2024-12-08T05:17:07.487Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:17.701 nvme0n1 : 2.00 15203.68 59.39 0.00 0.00 8411.24 7626.01 23116.33 00:18:17.701 [2024-12-08T05:17:07.487Z] =================================================================================================================== 00:18:17.701 [2024-12-08T05:17:07.487Z] Total : 15203.68 59.39 0.00 0.00 8411.24 7626.01 23116.33 00:18:17.701 0 00:18:17.701 05:17:07 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:17.701 05:17:07 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:17.701 05:17:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:17.701 05:17:07 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:17.701 | .driver_specific 00:18:17.701 | .nvme_error 00:18:17.701 | .status_code 00:18:17.701 | .command_transient_transport_error' 00:18:17.958 05:17:07 -- host/digest.sh@71 -- # (( 119 > 0 )) 00:18:17.958 05:17:07 -- host/digest.sh@73 -- # killprocess 84190 00:18:17.958 05:17:07 -- common/autotest_common.sh@936 -- # '[' -z 84190 ']' 00:18:17.958 05:17:07 -- common/autotest_common.sh@940 -- # kill -0 84190 00:18:17.958 05:17:07 -- common/autotest_common.sh@941 -- # uname 00:18:17.958 05:17:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:17.958 05:17:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84190 00:18:17.958 killing process with pid 84190 00:18:17.958 Received shutdown signal, test time was about 2.000000 seconds 00:18:17.958 00:18:17.959 Latency(us) 00:18:17.959 [2024-12-08T05:17:07.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.959 [2024-12-08T05:17:07.745Z] =================================================================================================================== 00:18:17.959 [2024-12-08T05:17:07.745Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:17.959 05:17:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:17.959 05:17:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:17.959 05:17:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84190' 00:18:17.959 05:17:07 -- common/autotest_common.sh@955 -- # kill 84190 00:18:17.959 05:17:07 -- common/autotest_common.sh@960 -- # wait 84190 00:18:17.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:18:17.959 05:17:07 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:18:17.959 05:17:07 -- host/digest.sh@54 -- # local rw bs qd 00:18:17.959 05:17:07 -- host/digest.sh@56 -- # rw=randwrite 00:18:17.959 05:17:07 -- host/digest.sh@56 -- # bs=131072 00:18:17.959 05:17:07 -- host/digest.sh@56 -- # qd=16 00:18:17.959 05:17:07 -- host/digest.sh@58 -- # bperfpid=84237 00:18:17.959 05:17:07 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:17.959 05:17:07 -- host/digest.sh@60 -- # waitforlisten 84237 /var/tmp/bperf.sock 00:18:17.959 05:17:07 -- common/autotest_common.sh@829 -- # '[' -z 84237 ']' 00:18:17.959 05:17:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:17.959 05:17:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:17.959 05:17:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:17.959 05:17:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:17.959 05:17:07 -- common/autotest_common.sh@10 -- # set +x 00:18:18.223 [2024-12-08 05:17:07.751638] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:18.223 [2024-12-08 05:17:07.751780] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84237 ] 00:18:18.223 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:18.223 Zero copy mechanism will not be used. 00:18:18.223 [2024-12-08 05:17:07.893121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.223 [2024-12-08 05:17:07.927096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.169 05:17:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:19.169 05:17:08 -- common/autotest_common.sh@862 -- # return 0 00:18:19.169 05:17:08 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:19.169 05:17:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:19.426 05:17:09 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:19.426 05:17:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.426 05:17:09 -- common/autotest_common.sh@10 -- # set +x 00:18:19.426 05:17:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.426 05:17:09 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:19.426 05:17:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:19.685 nvme0n1 00:18:19.685 05:17:09 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:19.685 05:17:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.685 05:17:09 -- common/autotest_common.sh@10 -- # set +x 00:18:19.685 05:17:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.685 05:17:09 -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:19.685 05:17:09 -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:19.685 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:19.685 Zero copy mechanism will not be used. 00:18:19.685 Running I/O for 2 seconds... 00:18:19.944 [2024-12-08 05:17:09.472518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.944 [2024-12-08 05:17:09.472884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.944 [2024-12-08 05:17:09.472917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.945 [2024-12-08 05:17:09.477793] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.945 [2024-12-08 05:17:09.478111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.945 [2024-12-08 05:17:09.478144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.945 [2024-12-08 05:17:09.482998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.945 [2024-12-08 05:17:09.483303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.945 [2024-12-08 05:17:09.483359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.945 [2024-12-08 05:17:09.488464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.945 [2024-12-08 05:17:09.488813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.945 [2024-12-08 05:17:09.488846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.945 [2024-12-08 05:17:09.493661] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.945 [2024-12-08 05:17:09.493992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.945 [2024-12-08 05:17:09.494024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.945 [2024-12-08 05:17:09.498823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.945 [2024-12-08 05:17:09.499138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.945 [2024-12-08 05:17:09.499169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.945 [2024-12-08 05:17:09.503978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.945 [2024-12-08 05:17:09.504298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.945 [2024-12-08 05:17:09.504336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.945 [2024-12-08 05:17:09.509219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.945 [2024-12-08 05:17:09.509538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.945 [2024-12-08 05:17:09.509570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.945 [2024-12-08 05:17:09.514488] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.945 [2024-12-08 05:17:09.514818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.945 [2024-12-08 05:17:09.514850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.945 [2024-12-08 05:17:09.519880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.945 [2024-12-08 05:17:09.520198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.945 [2024-12-08 05:17:09.520229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.945 [2024-12-08 05:17:09.525074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.945 [2024-12-08 05:17:09.525397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.945 [2024-12-08 05:17:09.525428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.945 [2024-12-08 05:17:09.530220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.945 [2024-12-08 05:17:09.530527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.945 [2024-12-08 05:17:09.530558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.945 [2024-12-08 05:17:09.535369] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.945 [2024-12-08 05:17:09.535701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.945 [2024-12-08 05:17:09.535731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.945 [2024-12-08 05:17:09.540563] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.945 
[2024-12-08 05:17:09.540881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.945 [2024-12-08 05:17:09.540911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.945 [2024-12-08 05:17:09.545659] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.945 [2024-12-08 05:17:09.545977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.945 [2024-12-08 05:17:09.546008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.945 [2024-12-08 05:17:09.550800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.945 [2024-12-08 05:17:09.551105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.945 [2024-12-08 05:17:09.551135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.945 [2024-12-08 05:17:09.556034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.945 [2024-12-08 05:17:09.556338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.945 [2024-12-08 05:17:09.556368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.945 [2024-12-08 05:17:09.561184] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.945 [2024-12-08 05:17:09.561487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.945 [2024-12-08 05:17:09.561517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.945 [2024-12-08 05:17:09.566305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.945 [2024-12-08 05:17:09.566609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.945 [2024-12-08 05:17:09.566640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.945 [2024-12-08 05:17:09.571461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.945 [2024-12-08 05:17:09.571783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.945 [2024-12-08 05:17:09.571813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.945 [2024-12-08 05:17:09.576531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.945 [2024-12-08 05:17:09.576851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.945 [2024-12-08 05:17:09.576882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.945 [2024-12-08 05:17:09.581665] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.945 [2024-12-08 05:17:09.581980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.945 [2024-12-08 05:17:09.582010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.945 [2024-12-08 05:17:09.586777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.587080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.587110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.946 [2024-12-08 05:17:09.591854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.592253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.592297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.946 [2024-12-08 05:17:09.596809] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.596910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.596948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.946 [2024-12-08 05:17:09.601934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.602038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.602074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.946 [2024-12-08 05:17:09.606959] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.607091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.607127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.946 [2024-12-08 05:17:09.612075] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.612173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.612208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.946 [2024-12-08 05:17:09.617181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.617269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.617297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.946 [2024-12-08 05:17:09.622283] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.622371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.622397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.946 [2024-12-08 05:17:09.627307] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.627406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.627431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.946 [2024-12-08 05:17:09.632517] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.632608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.632634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.946 [2024-12-08 05:17:09.637882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.637972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.637998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.946 [2024-12-08 05:17:09.642946] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.643034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.643058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:18:19.946 [2024-12-08 05:17:09.648008] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.648098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.648123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.946 [2024-12-08 05:17:09.653064] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.653151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.653176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.946 [2024-12-08 05:17:09.658144] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.658232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.658257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.946 [2024-12-08 05:17:09.663275] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.663361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.663399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.946 [2024-12-08 05:17:09.668474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.668567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.668592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.946 [2024-12-08 05:17:09.673504] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.673596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.673622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.946 [2024-12-08 05:17:09.678506] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.678594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.678620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.946 [2024-12-08 05:17:09.683643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.683753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.683779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.946 [2024-12-08 05:17:09.688868] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.688976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.689013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.946 [2024-12-08 05:17:09.694086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.694175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.694201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.946 [2024-12-08 05:17:09.699356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.946 [2024-12-08 05:17:09.699462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.946 [2024-12-08 05:17:09.699493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.946 [2024-12-08 05:17:09.704590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.947 [2024-12-08 05:17:09.704699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.947 [2024-12-08 05:17:09.704726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.947 [2024-12-08 05:17:09.709744] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.947 [2024-12-08 05:17:09.709832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.947 [2024-12-08 05:17:09.709857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.947 [2024-12-08 05:17:09.715088] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.947 [2024-12-08 05:17:09.715189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.947 [2024-12-08 05:17:09.715219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.947 [2024-12-08 05:17:09.720200] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.947 [2024-12-08 05:17:09.720301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.947 [2024-12-08 05:17:09.720334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.947 [2024-12-08 05:17:09.725281] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:19.947 [2024-12-08 05:17:09.725371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.947 [2024-12-08 05:17:09.725397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.207 [2024-12-08 05:17:09.730399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.207 [2024-12-08 05:17:09.730487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.207 [2024-12-08 05:17:09.730513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.207 [2024-12-08 05:17:09.735507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.207 [2024-12-08 05:17:09.735606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.207 [2024-12-08 05:17:09.735642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.207 [2024-12-08 05:17:09.740688] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.207 [2024-12-08 05:17:09.740784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.207 [2024-12-08 05:17:09.740824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.207 [2024-12-08 05:17:09.745848] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.207 [2024-12-08 05:17:09.745951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.207 [2024-12-08 05:17:09.745993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.207 [2024-12-08 05:17:09.751049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.207 [2024-12-08 05:17:09.751138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 [2024-12-08 05:17:09.751169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.756132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.208 [2024-12-08 05:17:09.756220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 [2024-12-08 05:17:09.756256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.761206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.208 [2024-12-08 05:17:09.761294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 [2024-12-08 05:17:09.761326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.766306] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.208 [2024-12-08 05:17:09.766413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 [2024-12-08 05:17:09.766444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.771517] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.208 [2024-12-08 05:17:09.771604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 [2024-12-08 05:17:09.771630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.776598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.208 [2024-12-08 05:17:09.776700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 [2024-12-08 05:17:09.776724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.781691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.208 [2024-12-08 05:17:09.781788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 [2024-12-08 05:17:09.781812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.786702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.208 [2024-12-08 05:17:09.786788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 
[2024-12-08 05:17:09.786812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.791840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.208 [2024-12-08 05:17:09.791932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 [2024-12-08 05:17:09.791955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.796908] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.208 [2024-12-08 05:17:09.796995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 [2024-12-08 05:17:09.797018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.801974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.208 [2024-12-08 05:17:09.802063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 [2024-12-08 05:17:09.802087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.806976] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.208 [2024-12-08 05:17:09.807071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 [2024-12-08 05:17:09.807103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.812127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.208 [2024-12-08 05:17:09.812235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 [2024-12-08 05:17:09.812265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.817274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.208 [2024-12-08 05:17:09.817386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 [2024-12-08 05:17:09.817417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.822409] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.208 [2024-12-08 05:17:09.822517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 [2024-12-08 05:17:09.822545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.827515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.208 [2024-12-08 05:17:09.827601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 [2024-12-08 05:17:09.827625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.832621] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.208 [2024-12-08 05:17:09.832741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 [2024-12-08 05:17:09.832772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.837752] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.208 [2024-12-08 05:17:09.837870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 [2024-12-08 05:17:09.837899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.842939] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.208 [2024-12-08 05:17:09.843055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 [2024-12-08 05:17:09.843085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.848036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.208 [2024-12-08 05:17:09.848154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 [2024-12-08 05:17:09.848185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.853181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.208 [2024-12-08 05:17:09.853294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 [2024-12-08 05:17:09.853324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.858300] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.208 [2024-12-08 05:17:09.858417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 [2024-12-08 05:17:09.858447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.863478] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.208 [2024-12-08 05:17:09.863597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.208 [2024-12-08 05:17:09.863628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.208 [2024-12-08 05:17:09.868659] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.868790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.868821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.873851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.873955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.873983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.878908] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.879019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.879050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.884064] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.884174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.884205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.889256] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.889369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.889398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.894405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.894501] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.894526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.899498] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.899586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.899610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.904594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.904698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.904723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.909712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.909808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.909832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.914833] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.914936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.914963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.919968] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.920087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.920117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.925170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.925273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.925301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.930284] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 
[2024-12-08 05:17:09.930374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.930399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.935384] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.935471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.935496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.940483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.940575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.940598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.945609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.945721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.945746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.950711] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.950800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.950823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.955898] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.955990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.956021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.960999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.961088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.961113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.966116] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.966221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.966251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.971300] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.971430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.971461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.976457] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.976573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.976601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.981611] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.981742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.209 [2024-12-08 05:17:09.981771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.209 [2024-12-08 05:17:09.986828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.209 [2024-12-08 05:17:09.986936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.210 [2024-12-08 05:17:09.986967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.470 [2024-12-08 05:17:09.992028] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.470 [2024-12-08 05:17:09.992142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.470 [2024-12-08 05:17:09.992172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.470 [2024-12-08 05:17:09.997141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.470 [2024-12-08 05:17:09.997254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.470 [2024-12-08 05:17:09.997282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.470 [2024-12-08 05:17:10.002427] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.470 [2024-12-08 05:17:10.002539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.470 [2024-12-08 05:17:10.002571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.470 [2024-12-08 05:17:10.007898] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.470 [2024-12-08 05:17:10.008004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.470 [2024-12-08 05:17:10.008032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.470 [2024-12-08 05:17:10.013101] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.470 [2024-12-08 05:17:10.013206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.470 [2024-12-08 05:17:10.013234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.470 [2024-12-08 05:17:10.019973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.470 [2024-12-08 05:17:10.020125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.470 [2024-12-08 05:17:10.020155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.470 [2024-12-08 05:17:10.025465] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.470 [2024-12-08 05:17:10.025574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.470 [2024-12-08 05:17:10.025603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.470 [2024-12-08 05:17:10.031295] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.470 [2024-12-08 05:17:10.031437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.470 [2024-12-08 05:17:10.031467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.470 [2024-12-08 05:17:10.037601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.470 [2024-12-08 05:17:10.037715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.470 [2024-12-08 05:17:10.037749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:18:20.470 [2024-12-08 05:17:10.043001] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.470 [2024-12-08 05:17:10.043091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.470 [2024-12-08 05:17:10.043116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.470 [2024-12-08 05:17:10.048244] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.470 [2024-12-08 05:17:10.048336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.470 [2024-12-08 05:17:10.048362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.470 [2024-12-08 05:17:10.053397] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.470 [2024-12-08 05:17:10.053485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.470 [2024-12-08 05:17:10.053508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.470 [2024-12-08 05:17:10.058643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.470 [2024-12-08 05:17:10.058754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.470 [2024-12-08 05:17:10.058778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.470 [2024-12-08 05:17:10.063828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.470 [2024-12-08 05:17:10.063932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.470 [2024-12-08 05:17:10.063956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.069170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.069260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.069284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.074333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.074425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.074448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.079727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.079827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.079853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.085166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.085261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.085286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.091343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.091491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.091521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.097493] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.097580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.097607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.102883] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.102972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.102997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.108778] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.108873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.108908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.114007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.114093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.114120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.119389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.119481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.119506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.124541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.124633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.124657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.129638] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.129815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.129840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.134839] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.134928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.134957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.140297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.140389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.140413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.145531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.145617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.145641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.151601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.151703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.151727] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.157319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.157417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.157441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.162550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.162637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.162661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.167719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.167806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.167839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.172878] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.172966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.172990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.178015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.178104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.178129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.183142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.183232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.183257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.188482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.188576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:20.471 [2024-12-08 05:17:10.188601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.193720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.193814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.471 [2024-12-08 05:17:10.193850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.471 [2024-12-08 05:17:10.198935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.471 [2024-12-08 05:17:10.199029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.472 [2024-12-08 05:17:10.199054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.472 [2024-12-08 05:17:10.204103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.472 [2024-12-08 05:17:10.204192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.472 [2024-12-08 05:17:10.204215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.472 [2024-12-08 05:17:10.209302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.472 [2024-12-08 05:17:10.209398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.472 [2024-12-08 05:17:10.209422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.472 [2024-12-08 05:17:10.214531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.472 [2024-12-08 05:17:10.214622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.472 [2024-12-08 05:17:10.214645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.472 [2024-12-08 05:17:10.219671] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.472 [2024-12-08 05:17:10.219777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.472 [2024-12-08 05:17:10.219803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.472 [2024-12-08 05:17:10.224790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.472 [2024-12-08 05:17:10.224877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.472 [2024-12-08 05:17:10.224901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.472 [2024-12-08 05:17:10.229900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.472 [2024-12-08 05:17:10.229993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.472 [2024-12-08 05:17:10.230017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.472 [2024-12-08 05:17:10.235026] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.472 [2024-12-08 05:17:10.235113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.472 [2024-12-08 05:17:10.235144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.472 [2024-12-08 05:17:10.240331] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.472 [2024-12-08 05:17:10.240420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.472 [2024-12-08 05:17:10.240445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.472 [2024-12-08 05:17:10.245415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.472 [2024-12-08 05:17:10.245507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.472 [2024-12-08 05:17:10.245531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.472 [2024-12-08 05:17:10.250562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.472 [2024-12-08 05:17:10.250650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.472 [2024-12-08 05:17:10.250689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.733 [2024-12-08 05:17:10.255627] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.733 [2024-12-08 05:17:10.255732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.733 [2024-12-08 05:17:10.255756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.733 [2024-12-08 05:17:10.260812] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.733 [2024-12-08 05:17:10.260913] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.733 [2024-12-08 05:17:10.260942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.733 [2024-12-08 05:17:10.265895] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.733 [2024-12-08 05:17:10.265983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.733 [2024-12-08 05:17:10.266008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.733 [2024-12-08 05:17:10.270987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.733 [2024-12-08 05:17:10.271074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.733 [2024-12-08 05:17:10.271098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.733 [2024-12-08 05:17:10.276116] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.733 [2024-12-08 05:17:10.276211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.733 [2024-12-08 05:17:10.276235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.733 [2024-12-08 05:17:10.281250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.733 [2024-12-08 05:17:10.281336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.733 [2024-12-08 05:17:10.281361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.733 [2024-12-08 05:17:10.286318] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.733 [2024-12-08 05:17:10.286406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.733 [2024-12-08 05:17:10.286430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.733 [2024-12-08 05:17:10.291412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.733 [2024-12-08 05:17:10.291503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.733 [2024-12-08 05:17:10.291527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.733 [2024-12-08 05:17:10.296514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.733 [2024-12-08 05:17:10.296600] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.733 [2024-12-08 05:17:10.296624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.733 [2024-12-08 05:17:10.301611] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.733 [2024-12-08 05:17:10.301713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.733 [2024-12-08 05:17:10.301737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.733 [2024-12-08 05:17:10.306704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.733 [2024-12-08 05:17:10.306797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.733 [2024-12-08 05:17:10.306821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.733 [2024-12-08 05:17:10.311807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.733 [2024-12-08 05:17:10.311896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.733 [2024-12-08 05:17:10.311920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.733 [2024-12-08 05:17:10.316868] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.733 [2024-12-08 05:17:10.316955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.733 [2024-12-08 05:17:10.316991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.733 [2024-12-08 05:17:10.321942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.733 [2024-12-08 05:17:10.322030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.733 [2024-12-08 05:17:10.322054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.733 [2024-12-08 05:17:10.326933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.733 [2024-12-08 05:17:10.327023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.733 [2024-12-08 05:17:10.327048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.733 [2024-12-08 05:17:10.331996] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 
00:18:20.733 [2024-12-08 05:17:10.332095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.733 [2024-12-08 05:17:10.332120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.733 [2024-12-08 05:17:10.337098] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.733 [2024-12-08 05:17:10.337187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.733 [2024-12-08 05:17:10.337211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.733 [2024-12-08 05:17:10.342133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.733 [2024-12-08 05:17:10.342217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.733 [2024-12-08 05:17:10.342240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.733 [2024-12-08 05:17:10.347152] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.733 [2024-12-08 05:17:10.347248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.733 [2024-12-08 05:17:10.347271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.733 [2024-12-08 05:17:10.352229] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.733 [2024-12-08 05:17:10.352316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.733 [2024-12-08 05:17:10.352340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.733 [2024-12-08 05:17:10.357320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.733 [2024-12-08 05:17:10.357415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.733 [2024-12-08 05:17:10.357439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.733 [2024-12-08 05:17:10.362358] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.362447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.362472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.367572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.367662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.367706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.372702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.372792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.372818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.377703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.377794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.377821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.382827] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.382915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.382941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.387921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.388028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.388064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.393010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.393099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.393124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.398130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.398217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.398244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.403273] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.403363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.403402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.409258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.409352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.409379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.415138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.415270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.415308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.421867] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.421982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.422008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.428086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.428200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.428226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.433936] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.434030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.434057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.439922] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.440017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.440043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
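
The repeated data_crc32_calc_done errors above are the NVMe/TCP data digest (DDGST) check failing: the digest is a CRC32C computed over the payload of each data PDU, and a mismatch is reported back to the initiator as the transport-level completions seen in this log rather than as a media error. As a rough, self-contained sketch of the digest calculation only (not SPDK's implementation, which has its own crc32c helpers), a bit-at-a-time CRC32C looks like this:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bit-at-a-time CRC32C (Castagnoli), the polynomial NVMe/TCP uses for its
 * header and data digests.  Reflected polynomial 0x82F63B78, initial value
 * and final XOR of 0xFFFFFFFF.  Check value: crc32c("123456789") == 0xE3069283. */
static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* A receiver recomputes the digest over the PDU payload and compares it
     * with the DDGST field that arrived on the wire; any mismatch is a
     * "data digest error" like the ones logged above. */
    const char payload[] = "123456789";
    printf("ddgst=0x%08X\n", crc32c(payload, sizeof(payload) - 1)); /* 0xE3069283 */
    return 0;
}
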
00:18:20.734 [2024-12-08 05:17:10.444953] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.445041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.445066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.450295] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.450383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.450406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.455385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.455484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.455509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.460786] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.460874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.460898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.465789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.465880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.465903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.471084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.471173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.471196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.476241] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.476328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.476351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.481507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.481597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.481622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.486623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.486747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.486778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.491945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.492031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.492055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.497092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.497184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.497208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.503226] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.503339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.503363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.508825] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.508916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.508940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.734 [2024-12-08 05:17:10.514555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.734 [2024-12-08 05:17:10.514669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.734 [2024-12-08 05:17:10.514730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.994 [2024-12-08 05:17:10.520410] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.994 [2024-12-08 05:17:10.520502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.994 [2024-12-08 05:17:10.520526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.994 [2024-12-08 05:17:10.525646] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.994 [2024-12-08 05:17:10.525752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.994 [2024-12-08 05:17:10.525776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.994 [2024-12-08 05:17:10.531665] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.994 [2024-12-08 05:17:10.531773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.994 [2024-12-08 05:17:10.531798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.994 [2024-12-08 05:17:10.536966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.994 [2024-12-08 05:17:10.537055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.994 [2024-12-08 05:17:10.537078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.994 [2024-12-08 05:17:10.542482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.994 [2024-12-08 05:17:10.542575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.994 [2024-12-08 05:17:10.542599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.994 [2024-12-08 05:17:10.548781] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.994 [2024-12-08 05:17:10.548876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.994 [2024-12-08 05:17:10.548900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.994 [2024-12-08 05:17:10.554258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.994 [2024-12-08 05:17:10.554363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.994 [2024-12-08 05:17:10.554388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.994 [2024-12-08 05:17:10.559884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.994 [2024-12-08 05:17:10.559972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.994 [2024-12-08 05:17:10.559997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.994 [2024-12-08 05:17:10.565202] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.994 [2024-12-08 05:17:10.565295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.994 [2024-12-08 05:17:10.565318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.994 [2024-12-08 05:17:10.571849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.994 [2024-12-08 05:17:10.571980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.994 [2024-12-08 05:17:10.572003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.994 [2024-12-08 05:17:10.579429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.994 [2024-12-08 05:17:10.579547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.994 [2024-12-08 05:17:10.579576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.994 [2024-12-08 05:17:10.587062] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.994 [2024-12-08 05:17:10.587172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.994 [2024-12-08 05:17:10.587198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.994 [2024-12-08 05:17:10.594637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.994 [2024-12-08 05:17:10.594776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.994 [2024-12-08 05:17:10.594803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.994 [2024-12-08 05:17:10.602090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.994 [2024-12-08 05:17:10.602204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.994 
[2024-12-08 05:17:10.602230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.994 [2024-12-08 05:17:10.609665] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.994 [2024-12-08 05:17:10.609799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.994 [2024-12-08 05:17:10.609825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.994 [2024-12-08 05:17:10.616661] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.994 [2024-12-08 05:17:10.616800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.994 [2024-12-08 05:17:10.616827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.994 [2024-12-08 05:17:10.623598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.994 [2024-12-08 05:17:10.623732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.994 [2024-12-08 05:17:10.623758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.994 [2024-12-08 05:17:10.631254] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.994 [2024-12-08 05:17:10.631390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.994 [2024-12-08 05:17:10.631416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.994 [2024-12-08 05:17:10.638733] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.994 [2024-12-08 05:17:10.638850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.994 [2024-12-08 05:17:10.638876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.995 [2024-12-08 05:17:10.645660] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.995 [2024-12-08 05:17:10.645799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.995 [2024-12-08 05:17:10.645825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.995 [2024-12-08 05:17:10.653164] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.995 [2024-12-08 05:17:10.653281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.995 [2024-12-08 05:17:10.653306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.995 [2024-12-08 05:17:10.660417] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.995 [2024-12-08 05:17:10.660527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.995 [2024-12-08 05:17:10.660553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.995 [2024-12-08 05:17:10.667749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.995 [2024-12-08 05:17:10.667862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.995 [2024-12-08 05:17:10.667887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.995 [2024-12-08 05:17:10.675125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.995 [2024-12-08 05:17:10.675238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.995 [2024-12-08 05:17:10.675264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.995 [2024-12-08 05:17:10.682550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.995 [2024-12-08 05:17:10.682663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.995 [2024-12-08 05:17:10.682705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.995 [2024-12-08 05:17:10.687895] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.995 [2024-12-08 05:17:10.687983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.995 [2024-12-08 05:17:10.688008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.995 [2024-12-08 05:17:10.692997] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.995 [2024-12-08 05:17:10.693084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.995 [2024-12-08 05:17:10.693109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.995 [2024-12-08 05:17:10.698098] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.995 [2024-12-08 05:17:10.698191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.995 [2024-12-08 05:17:10.698216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.995 [2024-12-08 05:17:10.703219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.995 [2024-12-08 05:17:10.703306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.995 [2024-12-08 05:17:10.703331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.995 [2024-12-08 05:17:10.708358] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.995 [2024-12-08 05:17:10.708446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.995 [2024-12-08 05:17:10.708471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.995 [2024-12-08 05:17:10.713449] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.995 [2024-12-08 05:17:10.713538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.995 [2024-12-08 05:17:10.713563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.995 [2024-12-08 05:17:10.718573] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.995 [2024-12-08 05:17:10.718659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.995 [2024-12-08 05:17:10.718701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.995 [2024-12-08 05:17:10.723712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.995 [2024-12-08 05:17:10.723801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.995 [2024-12-08 05:17:10.723826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.995 [2024-12-08 05:17:10.728828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.995 [2024-12-08 05:17:10.728918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.995 [2024-12-08 05:17:10.728945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.995 [2024-12-08 05:17:10.733975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.995 [2024-12-08 05:17:10.734061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.995 [2024-12-08 05:17:10.734087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.995 [2024-12-08 05:17:10.739140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.995 [2024-12-08 05:17:10.739233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.995 [2024-12-08 05:17:10.739259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.995 [2024-12-08 05:17:10.744319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.995 [2024-12-08 05:17:10.744408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.995 [2024-12-08 05:17:10.744433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.995 [2024-12-08 05:17:10.749454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.995 [2024-12-08 05:17:10.749549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.995 [2024-12-08 05:17:10.749574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.995 [2024-12-08 05:17:10.754602] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.995 [2024-12-08 05:17:10.754703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.995 [2024-12-08 05:17:10.754729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.995 [2024-12-08 05:17:10.759753] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.995 [2024-12-08 05:17:10.759843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.995 [2024-12-08 05:17:10.759868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.995 [2024-12-08 05:17:10.764843] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.995 [2024-12-08 05:17:10.764932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.995 [2024-12-08 05:17:10.764957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.996 [2024-12-08 05:17:10.769949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.996 
[2024-12-08 05:17:10.770038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.996 [2024-12-08 05:17:10.770062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.996 [2024-12-08 05:17:10.775039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:20.996 [2024-12-08 05:17:10.775127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.996 [2024-12-08 05:17:10.775152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.255 [2024-12-08 05:17:10.780140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.255 [2024-12-08 05:17:10.780227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.255 [2024-12-08 05:17:10.780253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.255 [2024-12-08 05:17:10.785263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.255 [2024-12-08 05:17:10.785351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.255 [2024-12-08 05:17:10.785376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.255 [2024-12-08 05:17:10.790389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.255 [2024-12-08 05:17:10.790481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.255 [2024-12-08 05:17:10.790507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.255 [2024-12-08 05:17:10.795494] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.255 [2024-12-08 05:17:10.795581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.255 [2024-12-08 05:17:10.795606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.255 [2024-12-08 05:17:10.800588] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.255 [2024-12-08 05:17:10.800690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.255 [2024-12-08 05:17:10.800715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.255 [2024-12-08 05:17:10.805654] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.255 [2024-12-08 05:17:10.805754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.255 [2024-12-08 05:17:10.805779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.256 [2024-12-08 05:17:10.810792] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.256 [2024-12-08 05:17:10.810881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.256 [2024-12-08 05:17:10.810906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.256 [2024-12-08 05:17:10.815920] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.256 [2024-12-08 05:17:10.816008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.256 [2024-12-08 05:17:10.816033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.256 [2024-12-08 05:17:10.821017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.256 [2024-12-08 05:17:10.821105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.256 [2024-12-08 05:17:10.821130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.256 [2024-12-08 05:17:10.826135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.256 [2024-12-08 05:17:10.826224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.256 [2024-12-08 05:17:10.826248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.256 [2024-12-08 05:17:10.831220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.256 [2024-12-08 05:17:10.831307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.256 [2024-12-08 05:17:10.831337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.256 [2024-12-08 05:17:10.836358] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.256 [2024-12-08 05:17:10.836443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.256 [2024-12-08 05:17:10.836468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.256 [2024-12-08 05:17:10.841482] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.256 [2024-12-08 05:17:10.841570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.256 [2024-12-08 05:17:10.841595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.256 [2024-12-08 05:17:10.846559] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.256 [2024-12-08 05:17:10.846642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.256 [2024-12-08 05:17:10.846667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.256 [2024-12-08 05:17:10.851613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.256 [2024-12-08 05:17:10.851717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.256 [2024-12-08 05:17:10.851744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.256 [2024-12-08 05:17:10.856740] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.256 [2024-12-08 05:17:10.856829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.256 [2024-12-08 05:17:10.856854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.256 [2024-12-08 05:17:10.861821] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.256 [2024-12-08 05:17:10.861909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.256 [2024-12-08 05:17:10.861934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.256 [2024-12-08 05:17:10.866883] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.256 [2024-12-08 05:17:10.866967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.256 [2024-12-08 05:17:10.866992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.256 [2024-12-08 05:17:10.872034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.256 [2024-12-08 05:17:10.872121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.256 [2024-12-08 05:17:10.872146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
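
Each spdk_nvme_print_completion line above reports the status as an (SCT/SC) pair, here (00/22): status code type 0x0 (generic command status) with status code 0x22, Transient Transport Error, followed by the phase (p), more (m) and do-not-retry (dnr) bits. A minimal sketch of how those fields unpack from the completion entry's status double word, assuming the standard NVMe completion queue entry layout (struct and function names below are illustrative, not SPDK's):

#include <stdint.h>
#include <stdio.h>

/* Fields echoed in the completion prints above, decoded from CQE DW3:
 * cid in bits 15:0, phase tag in bit 16, and the 15-bit status field in
 * bits 31:17 (SC 7:0, SCT 10:8, More bit 13, Do Not Retry bit 14). */
struct cpl_status {
    uint16_t cid;   /* command identifier                    */
    uint8_t  p;     /* phase tag                             */
    uint8_t  sct;   /* status code type, e.g. 0x0 (generic)  */
    uint8_t  sc;    /* status code, e.g. 0x22                */
    uint8_t  m;     /* more                                  */
    uint8_t  dnr;   /* do not retry                          */
};

static struct cpl_status decode_cqe_dw3(uint32_t dw3)
{
    struct cpl_status s;
    uint16_t status = (uint16_t)(dw3 >> 17);      /* 15-bit status field */

    s.cid = (uint16_t)(dw3 & 0xFFFFu);
    s.p   = (uint8_t)((dw3 >> 16) & 0x1u);
    s.sc  = (uint8_t)(status & 0xFFu);
    s.sct = (uint8_t)((status >> 8) & 0x7u);
    s.m   = (uint8_t)((status >> 13) & 0x1u);
    s.dnr = (uint8_t)((status >> 14) & 0x1u);
    return s;
}

int main(void)
{
    /* Hypothetical DW3 carrying cid 15 and the status printed above:
     * SCT 0x0, SC 0x22 (Transient Transport Error), p/m/dnr all zero. */
    struct cpl_status s = decode_cqe_dw3((0x22u << 17) | 0x000Fu);

    printf("cid:%u (%02x/%02x) p:%u m:%u dnr:%u\n",
           s.cid, s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}
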
00:18:21.256 [2024-12-08 05:17:10.877140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.256 [2024-12-08 05:17:10.877227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.256 [2024-12-08 05:17:10.877252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.256 [2024-12-08 05:17:10.882230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.256 [2024-12-08 05:17:10.882314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.256 [2024-12-08 05:17:10.882340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.256 [2024-12-08 05:17:10.887335] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.256 [2024-12-08 05:17:10.887435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.256 [2024-12-08 05:17:10.887460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.256 [2024-12-08 05:17:10.892417] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.256 [2024-12-08 05:17:10.892504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.256 [2024-12-08 05:17:10.892530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.256 [2024-12-08 05:17:10.897522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.256 [2024-12-08 05:17:10.897610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.256 [2024-12-08 05:17:10.897635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.256 [2024-12-08 05:17:10.902591] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.256 [2024-12-08 05:17:10.902692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.256 [2024-12-08 05:17:10.902718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.256 [2024-12-08 05:17:10.907667] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.256 [2024-12-08 05:17:10.907764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.256 [2024-12-08 05:17:10.907789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.256 [2024-12-08 05:17:10.912748] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.256 [2024-12-08 05:17:10.912837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.257 [2024-12-08 05:17:10.912863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.257 [2024-12-08 05:17:10.917803] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.257 [2024-12-08 05:17:10.917890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.257 [2024-12-08 05:17:10.917916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.257 [2024-12-08 05:17:10.922878] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.257 [2024-12-08 05:17:10.922965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.257 [2024-12-08 05:17:10.922990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.257 [2024-12-08 05:17:10.927968] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.257 [2024-12-08 05:17:10.928056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.257 [2024-12-08 05:17:10.928082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.257 [2024-12-08 05:17:10.933045] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.257 [2024-12-08 05:17:10.933137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.257 [2024-12-08 05:17:10.933161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.257 [2024-12-08 05:17:10.938114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.257 [2024-12-08 05:17:10.938203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.257 [2024-12-08 05:17:10.938227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.257 [2024-12-08 05:17:10.943216] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.257 [2024-12-08 05:17:10.943310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.257 [2024-12-08 05:17:10.943343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.257 [2024-12-08 05:17:10.948339] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.257 [2024-12-08 05:17:10.948426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.257 [2024-12-08 05:17:10.948453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.257 [2024-12-08 05:17:10.953494] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.257 [2024-12-08 05:17:10.953584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.257 [2024-12-08 05:17:10.953609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.257 [2024-12-08 05:17:10.958612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.257 [2024-12-08 05:17:10.958713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.257 [2024-12-08 05:17:10.958737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.257 [2024-12-08 05:17:10.963702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.257 [2024-12-08 05:17:10.963786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.257 [2024-12-08 05:17:10.963812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.257 [2024-12-08 05:17:10.968736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.257 [2024-12-08 05:17:10.968825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.257 [2024-12-08 05:17:10.968850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.257 [2024-12-08 05:17:10.973844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.257 [2024-12-08 05:17:10.973930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.257 [2024-12-08 05:17:10.973955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.257 [2024-12-08 05:17:10.978902] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.257 [2024-12-08 05:17:10.978991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.257 [2024-12-08 05:17:10.979015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.257 [2024-12-08 05:17:10.984007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.257 [2024-12-08 05:17:10.984092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.257 [2024-12-08 05:17:10.984117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.257 [2024-12-08 05:17:10.989078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.257 [2024-12-08 05:17:10.989167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.257 [2024-12-08 05:17:10.989192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.257 [2024-12-08 05:17:10.994195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.257 [2024-12-08 05:17:10.994282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.257 [2024-12-08 05:17:10.994307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.257 [2024-12-08 05:17:10.999274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.257 [2024-12-08 05:17:10.999371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.257 [2024-12-08 05:17:10.999406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.257 [2024-12-08 05:17:11.004399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.257 [2024-12-08 05:17:11.004487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.257 [2024-12-08 05:17:11.004513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.257 [2024-12-08 05:17:11.009494] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.257 [2024-12-08 05:17:11.009582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.257 [2024-12-08 05:17:11.009606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.257 [2024-12-08 05:17:11.014531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.257 [2024-12-08 05:17:11.014617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.257 
[2024-12-08 05:17:11.014642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.258 [2024-12-08 05:17:11.019572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.258 [2024-12-08 05:17:11.019661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.258 [2024-12-08 05:17:11.019702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.258 [2024-12-08 05:17:11.024691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.258 [2024-12-08 05:17:11.024797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.258 [2024-12-08 05:17:11.024826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.258 [2024-12-08 05:17:11.029762] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.258 [2024-12-08 05:17:11.029851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.258 [2024-12-08 05:17:11.029876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.258 [2024-12-08 05:17:11.034822] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.258 [2024-12-08 05:17:11.034910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.258 [2024-12-08 05:17:11.034936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.518 [2024-12-08 05:17:11.039970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.518 [2024-12-08 05:17:11.040060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.518 [2024-12-08 05:17:11.040084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.518 [2024-12-08 05:17:11.045031] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.518 [2024-12-08 05:17:11.045119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.518 [2024-12-08 05:17:11.045144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.518 [2024-12-08 05:17:11.050153] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.050243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.050267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.055265] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.055362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.055401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.060411] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.060497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.060524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.065546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.065650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.065693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.070716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.070815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.070844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.075896] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.075996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.076026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.081008] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.081112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.081142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.086127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.086231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.086261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.091244] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.091348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.091385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.096375] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.096468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.096494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.101506] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.101602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.101632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.106689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.106792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.106823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.111909] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.112005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.112034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.117124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.117214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.117241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.122272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.122362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.122390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.127448] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.127538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.127564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.132583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.132689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.132716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.137814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.137901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.137928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.142921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.143009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.143036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.148104] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.148216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.148254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.153470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.153559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.153586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.158937] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 
[2024-12-08 05:17:11.159045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.159072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.164584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.164696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.164729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.169822] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.169927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.169961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.175033] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.175131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.175158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.180252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.180340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.180366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.519 [2024-12-08 05:17:11.185473] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.519 [2024-12-08 05:17:11.185564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.519 [2024-12-08 05:17:11.185597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.520 [2024-12-08 05:17:11.190613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.520 [2024-12-08 05:17:11.190724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.520 [2024-12-08 05:17:11.190751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.520 [2024-12-08 05:17:11.195784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.520 [2024-12-08 05:17:11.195875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.520 [2024-12-08 05:17:11.195901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.520 [2024-12-08 05:17:11.200985] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.520 [2024-12-08 05:17:11.201086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.520 [2024-12-08 05:17:11.201111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.520 [2024-12-08 05:17:11.206293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.520 [2024-12-08 05:17:11.206387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.520 [2024-12-08 05:17:11.206413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.520 [2024-12-08 05:17:11.211425] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.520 [2024-12-08 05:17:11.211525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.520 [2024-12-08 05:17:11.211558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.520 [2024-12-08 05:17:11.216555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.520 [2024-12-08 05:17:11.216643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.520 [2024-12-08 05:17:11.216668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.520 [2024-12-08 05:17:11.221612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.520 [2024-12-08 05:17:11.221713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.520 [2024-12-08 05:17:11.221738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.520 [2024-12-08 05:17:11.226732] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.520 [2024-12-08 05:17:11.226822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.520 [2024-12-08 05:17:11.226854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.520 [2024-12-08 05:17:11.231849] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.520 [2024-12-08 05:17:11.231937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.520 [2024-12-08 05:17:11.231963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.520 [2024-12-08 05:17:11.236921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.520 [2024-12-08 05:17:11.237009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.520 [2024-12-08 05:17:11.237034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.520 [2024-12-08 05:17:11.242010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.520 [2024-12-08 05:17:11.242101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.520 [2024-12-08 05:17:11.242126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.520 [2024-12-08 05:17:11.247087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.520 [2024-12-08 05:17:11.247175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.520 [2024-12-08 05:17:11.247200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.520 [2024-12-08 05:17:11.252423] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.520 [2024-12-08 05:17:11.252515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.520 [2024-12-08 05:17:11.252546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.520 [2024-12-08 05:17:11.257636] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.520 [2024-12-08 05:17:11.257747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.520 [2024-12-08 05:17:11.257773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.520 [2024-12-08 05:17:11.262844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.520 [2024-12-08 05:17:11.262935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.520 [2024-12-08 05:17:11.262961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:18:21.520 [2024-12-08 05:17:11.268007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.520 [2024-12-08 05:17:11.268101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.520 [2024-12-08 05:17:11.268127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.520 [2024-12-08 05:17:11.273109] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.520 [2024-12-08 05:17:11.273198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.520 [2024-12-08 05:17:11.273232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.520 [2024-12-08 05:17:11.278222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.520 [2024-12-08 05:17:11.278316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.520 [2024-12-08 05:17:11.278353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.520 [2024-12-08 05:17:11.283355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.520 [2024-12-08 05:17:11.283462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.520 [2024-12-08 05:17:11.283489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.520 [2024-12-08 05:17:11.288539] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.520 [2024-12-08 05:17:11.288628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.520 [2024-12-08 05:17:11.288654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.520 [2024-12-08 05:17:11.293606] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.520 [2024-12-08 05:17:11.293708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.520 [2024-12-08 05:17:11.293734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.520 [2024-12-08 05:17:11.298756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.520 [2024-12-08 05:17:11.298844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.520 [2024-12-08 05:17:11.298869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.780 [2024-12-08 05:17:11.303881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.780 [2024-12-08 05:17:11.303970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.780 [2024-12-08 05:17:11.304002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.780 [2024-12-08 05:17:11.308977] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.780 [2024-12-08 05:17:11.309065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.780 [2024-12-08 05:17:11.309096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.780 [2024-12-08 05:17:11.314077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.780 [2024-12-08 05:17:11.314164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.780 [2024-12-08 05:17:11.314189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.780 [2024-12-08 05:17:11.319151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.780 [2024-12-08 05:17:11.319240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.780 [2024-12-08 05:17:11.319265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.780 [2024-12-08 05:17:11.324291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.780 [2024-12-08 05:17:11.324381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.780 [2024-12-08 05:17:11.324412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.780 [2024-12-08 05:17:11.329379] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.780 [2024-12-08 05:17:11.329466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.780 [2024-12-08 05:17:11.329497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.780 [2024-12-08 05:17:11.334439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.780 [2024-12-08 05:17:11.334526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.780 [2024-12-08 05:17:11.334559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.780 [2024-12-08 05:17:11.339574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.780 [2024-12-08 05:17:11.339666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.780 [2024-12-08 05:17:11.339710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.780 [2024-12-08 05:17:11.344651] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.780 [2024-12-08 05:17:11.344753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.780 [2024-12-08 05:17:11.344778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.780 [2024-12-08 05:17:11.349751] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.780 [2024-12-08 05:17:11.349839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.780 [2024-12-08 05:17:11.349870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.780 [2024-12-08 05:17:11.354814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.780 [2024-12-08 05:17:11.354903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.780 [2024-12-08 05:17:11.354934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.780 [2024-12-08 05:17:11.359871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.780 [2024-12-08 05:17:11.359958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.780 [2024-12-08 05:17:11.359991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.780 [2024-12-08 05:17:11.364983] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.780 [2024-12-08 05:17:11.365078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.780 [2024-12-08 05:17:11.365103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.780 [2024-12-08 05:17:11.370069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.780 [2024-12-08 05:17:11.370160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.780 [2024-12-08 05:17:11.370185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.780 [2024-12-08 05:17:11.375161] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.780 [2024-12-08 05:17:11.375248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.780 [2024-12-08 05:17:11.375281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.780 [2024-12-08 05:17:11.380293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.780 [2024-12-08 05:17:11.380387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.780 [2024-12-08 05:17:11.380419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.780 [2024-12-08 05:17:11.385429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.780 [2024-12-08 05:17:11.385518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.780 [2024-12-08 05:17:11.385550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.780 [2024-12-08 05:17:11.390581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.780 [2024-12-08 05:17:11.390668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.780 [2024-12-08 05:17:11.390706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.780 [2024-12-08 05:17:11.395660] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.780 [2024-12-08 05:17:11.395765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.780 [2024-12-08 05:17:11.395790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.780 [2024-12-08 05:17:11.400887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.780 [2024-12-08 05:17:11.400978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.780 [2024-12-08 05:17:11.401003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.781 [2024-12-08 05:17:11.405984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.781 [2024-12-08 05:17:11.406071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.781 
[2024-12-08 05:17:11.406096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.781 [2024-12-08 05:17:11.411155] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.781 [2024-12-08 05:17:11.411246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.781 [2024-12-08 05:17:11.411277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.781 [2024-12-08 05:17:11.417374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.781 [2024-12-08 05:17:11.417484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.781 [2024-12-08 05:17:11.417516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.781 [2024-12-08 05:17:11.424042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.781 [2024-12-08 05:17:11.424147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.781 [2024-12-08 05:17:11.424179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.781 [2024-12-08 05:17:11.430514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.781 [2024-12-08 05:17:11.430618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.781 [2024-12-08 05:17:11.430649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.781 [2024-12-08 05:17:11.436900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.781 [2024-12-08 05:17:11.437004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.781 [2024-12-08 05:17:11.437035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.781 [2024-12-08 05:17:11.443323] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.781 [2024-12-08 05:17:11.443441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.781 [2024-12-08 05:17:11.443472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.781 [2024-12-08 05:17:11.449816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182c860) with pdu=0x2000190fef90 00:18:21.781 [2024-12-08 05:17:11.449921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL 
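The block above is the substance of the error case: every WRITE whose payload fails the CRC-32C data digest check on this TCP queue pair is flagged by data_crc32_calc_done and completed back to the host with status 00/22, COMMAND TRANSIENT TRANSPORT ERROR (dnr:0, so the command remains eligible for retry). When reading a saved copy of this console output, the flood can be cross-checked against the counter the test queries over RPC just below (375). A small sketch of that cross-check; the file name nvmf-digest.log is an assumption, not something this job produced:

  # Count the digest errors the initiator reported in a saved copy of this log.
  # "nvmf-digest.log" is an assumed file name for the captured console output.
  grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' nvmf-digest.log

  # The matching command completions can be counted the same way; both numbers should
  # line up with the command_transient_transport_error counter queried further down.
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' nvmf-digest.log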
00:18:21.781
00:18:21.781 Latency(us)
00:18:21.781 [2024-12-08T05:17:11.567Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:21.781 [2024-12-08T05:17:11.567Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:18:21.781 	 nvme0n1                  :       2.00    5814.93     726.87       0.00       0.00    2745.31    2085.24    7626.01
00:18:21.781 [2024-12-08T05:17:11.567Z] ===================================================================================================================
00:18:21.781 [2024-12-08T05:17:11.567Z] Total                       :              5814.93     726.87       0.00       0.00    2745.31    2085.24    7626.01
00:18:21.781 0
00:18:21.781 05:17:11 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:18:21.781 05:17:11 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:18:21.781 05:17:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:18:21.781 05:17:11 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:18:21.781 | .driver_specific
00:18:21.781 | .nvme_error
00:18:21.781 | .status_code
00:18:21.781 | .command_transient_transport_error'
00:18:22.040 05:17:11 -- host/digest.sh@71 -- # (( 375 > 0 ))
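The get_transient_errcount trace above is the actual pass/fail check of this test: digest.sh asks the bdevperf RPC server for nvme0n1's iostat and extracts the transient transport error counter, which came back as 375. A minimal stand-alone sketch of the same query, using only the rpc.py path, socket and jq filter shown in the trace; running it by hand outside this job is an assumption:

  #!/usr/bin/env bash
  # Sketch of the traced get_transient_errcount check; paths and bdev name come from the log above.
  set -euo pipefail

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # SPDK RPC client in the repo checkout
  sock=/var/tmp/bperf.sock                              # bdevperf's JSON-RPC socket
  bdev=nvme0n1

  # bdev_get_iostat returns JSON; per the jq filter in the trace, the NVMe error counters
  # live under .bdevs[0].driver_specific.nvme_error.status_code.
  errcount=$("$rpc_py" -s "$sock" bdev_get_iostat -b "$bdev" \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

  # Exits non-zero (test failure) unless at least one transient transport error was observed.
  (( errcount > 0 ))
  echo "observed $errcount transient transport errors on $bdev"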
sudo ']' 00:18:22.040 05:17:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84237' 00:18:22.040 05:17:11 -- common/autotest_common.sh@955 -- # kill 84237 00:18:22.040 05:17:11 -- common/autotest_common.sh@960 -- # wait 84237 00:18:22.297 05:17:11 -- host/digest.sh@115 -- # killprocess 84046 00:18:22.297 05:17:11 -- common/autotest_common.sh@936 -- # '[' -z 84046 ']' 00:18:22.297 05:17:11 -- common/autotest_common.sh@940 -- # kill -0 84046 00:18:22.297 05:17:11 -- common/autotest_common.sh@941 -- # uname 00:18:22.297 05:17:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:22.297 05:17:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84046 00:18:22.297 killing process with pid 84046 00:18:22.297 05:17:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:22.297 05:17:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:22.297 05:17:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84046' 00:18:22.297 05:17:11 -- common/autotest_common.sh@955 -- # kill 84046 00:18:22.297 05:17:11 -- common/autotest_common.sh@960 -- # wait 84046 00:18:22.297 ************************************ 00:18:22.297 END TEST nvmf_digest_error 00:18:22.297 ************************************ 00:18:22.297 00:18:22.297 real 0m16.392s 00:18:22.297 user 0m32.646s 00:18:22.297 sys 0m4.377s 00:18:22.297 05:17:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:22.297 05:17:12 -- common/autotest_common.sh@10 -- # set +x 00:18:22.555 05:17:12 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:18:22.555 05:17:12 -- host/digest.sh@139 -- # nvmftestfini 00:18:22.555 05:17:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:22.555 05:17:12 -- nvmf/common.sh@116 -- # sync 00:18:22.555 05:17:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:22.555 05:17:12 -- nvmf/common.sh@119 -- # set +e 00:18:22.555 05:17:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:22.555 05:17:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:22.555 rmmod nvme_tcp 00:18:22.555 rmmod nvme_fabrics 00:18:22.555 rmmod nvme_keyring 00:18:22.555 05:17:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:22.555 05:17:12 -- nvmf/common.sh@123 -- # set -e 00:18:22.555 05:17:12 -- nvmf/common.sh@124 -- # return 0 00:18:22.555 05:17:12 -- nvmf/common.sh@477 -- # '[' -n 84046 ']' 00:18:22.555 05:17:12 -- nvmf/common.sh@478 -- # killprocess 84046 00:18:22.555 05:17:12 -- common/autotest_common.sh@936 -- # '[' -z 84046 ']' 00:18:22.555 05:17:12 -- common/autotest_common.sh@940 -- # kill -0 84046 00:18:22.555 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (84046) - No such process 00:18:22.555 Process with pid 84046 is not found 00:18:22.555 05:17:12 -- common/autotest_common.sh@963 -- # echo 'Process with pid 84046 is not found' 00:18:22.555 05:17:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:22.555 05:17:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:22.555 05:17:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:22.555 05:17:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:22.555 05:17:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:22.555 05:17:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.555 05:17:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.555 05:17:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.555 05:17:12 -- nvmf/common.sh@278 -- # 
ip -4 addr flush nvmf_init_if 00:18:22.555 ************************************ 00:18:22.555 END TEST nvmf_digest 00:18:22.555 ************************************ 00:18:22.555 00:18:22.555 real 0m32.228s 00:18:22.555 user 1m2.521s 00:18:22.555 sys 0m9.060s 00:18:22.555 05:17:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:22.555 05:17:12 -- common/autotest_common.sh@10 -- # set +x 00:18:22.555 05:17:12 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:18:22.555 05:17:12 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:18:22.555 05:17:12 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:22.555 05:17:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:22.555 05:17:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:22.555 05:17:12 -- common/autotest_common.sh@10 -- # set +x 00:18:22.555 ************************************ 00:18:22.555 START TEST nvmf_multipath 00:18:22.555 ************************************ 00:18:22.555 05:17:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:22.813 * Looking for test storage... 00:18:22.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:22.813 05:17:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:22.813 05:17:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:22.813 05:17:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:22.813 05:17:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:22.813 05:17:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:22.813 05:17:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:22.813 05:17:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:22.813 05:17:12 -- scripts/common.sh@335 -- # IFS=.-: 00:18:22.813 05:17:12 -- scripts/common.sh@335 -- # read -ra ver1 00:18:22.813 05:17:12 -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.813 05:17:12 -- scripts/common.sh@336 -- # read -ra ver2 00:18:22.813 05:17:12 -- scripts/common.sh@337 -- # local 'op=<' 00:18:22.813 05:17:12 -- scripts/common.sh@339 -- # ver1_l=2 00:18:22.813 05:17:12 -- scripts/common.sh@340 -- # ver2_l=1 00:18:22.813 05:17:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:22.813 05:17:12 -- scripts/common.sh@343 -- # case "$op" in 00:18:22.813 05:17:12 -- scripts/common.sh@344 -- # : 1 00:18:22.813 05:17:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:22.813 05:17:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:22.813 05:17:12 -- scripts/common.sh@364 -- # decimal 1 00:18:22.813 05:17:12 -- scripts/common.sh@352 -- # local d=1 00:18:22.813 05:17:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.813 05:17:12 -- scripts/common.sh@354 -- # echo 1 00:18:22.813 05:17:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:22.813 05:17:12 -- scripts/common.sh@365 -- # decimal 2 00:18:22.813 05:17:12 -- scripts/common.sh@352 -- # local d=2 00:18:22.814 05:17:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.814 05:17:12 -- scripts/common.sh@354 -- # echo 2 00:18:22.814 05:17:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:22.814 05:17:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:22.814 05:17:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:22.814 05:17:12 -- scripts/common.sh@367 -- # return 0 00:18:22.814 05:17:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.814 05:17:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:22.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.814 --rc genhtml_branch_coverage=1 00:18:22.814 --rc genhtml_function_coverage=1 00:18:22.814 --rc genhtml_legend=1 00:18:22.814 --rc geninfo_all_blocks=1 00:18:22.814 --rc geninfo_unexecuted_blocks=1 00:18:22.814 00:18:22.814 ' 00:18:22.814 05:17:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:22.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.814 --rc genhtml_branch_coverage=1 00:18:22.814 --rc genhtml_function_coverage=1 00:18:22.814 --rc genhtml_legend=1 00:18:22.814 --rc geninfo_all_blocks=1 00:18:22.814 --rc geninfo_unexecuted_blocks=1 00:18:22.814 00:18:22.814 ' 00:18:22.814 05:17:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:22.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.814 --rc genhtml_branch_coverage=1 00:18:22.814 --rc genhtml_function_coverage=1 00:18:22.814 --rc genhtml_legend=1 00:18:22.814 --rc geninfo_all_blocks=1 00:18:22.814 --rc geninfo_unexecuted_blocks=1 00:18:22.814 00:18:22.814 ' 00:18:22.814 05:17:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:22.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.814 --rc genhtml_branch_coverage=1 00:18:22.814 --rc genhtml_function_coverage=1 00:18:22.814 --rc genhtml_legend=1 00:18:22.814 --rc geninfo_all_blocks=1 00:18:22.814 --rc geninfo_unexecuted_blocks=1 00:18:22.814 00:18:22.814 ' 00:18:22.814 05:17:12 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:22.814 05:17:12 -- nvmf/common.sh@7 -- # uname -s 00:18:22.814 05:17:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.814 05:17:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.814 05:17:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.814 05:17:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.814 05:17:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.814 05:17:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.814 05:17:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.814 05:17:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.814 05:17:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.814 05:17:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.814 05:17:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:18:22.814 
05:17:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:18:22.814 05:17:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.814 05:17:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.814 05:17:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:22.814 05:17:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:22.814 05:17:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.814 05:17:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.814 05:17:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.814 05:17:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.814 05:17:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.814 05:17:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.814 05:17:12 -- paths/export.sh@5 -- # export PATH 00:18:22.814 05:17:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.814 05:17:12 -- nvmf/common.sh@46 -- # : 0 00:18:22.814 05:17:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:22.814 05:17:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:22.814 05:17:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:22.814 05:17:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.814 05:17:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.814 05:17:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:18:22.814 05:17:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:22.814 05:17:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:22.814 05:17:12 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:22.814 05:17:12 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:22.814 05:17:12 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:22.814 05:17:12 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:22.814 05:17:12 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:22.814 05:17:12 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:22.814 05:17:12 -- host/multipath.sh@30 -- # nvmftestinit 00:18:22.814 05:17:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:22.814 05:17:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.814 05:17:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:22.814 05:17:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:22.814 05:17:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:22.814 05:17:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.814 05:17:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.814 05:17:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.814 05:17:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:22.814 05:17:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:22.814 05:17:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:22.814 05:17:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:22.814 05:17:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:22.814 05:17:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:22.814 05:17:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:22.814 05:17:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:22.814 05:17:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:22.814 05:17:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:22.814 05:17:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:22.814 05:17:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:22.814 05:17:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:22.814 05:17:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:22.814 05:17:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:22.814 05:17:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:22.814 05:17:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:22.814 05:17:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:22.814 05:17:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:22.814 05:17:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:22.814 Cannot find device "nvmf_tgt_br" 00:18:22.814 05:17:12 -- nvmf/common.sh@154 -- # true 00:18:22.814 05:17:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:22.814 Cannot find device "nvmf_tgt_br2" 00:18:22.814 05:17:12 -- nvmf/common.sh@155 -- # true 00:18:22.814 05:17:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:22.814 05:17:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:22.814 Cannot find device "nvmf_tgt_br" 00:18:22.814 05:17:12 -- nvmf/common.sh@157 -- # true 00:18:22.814 05:17:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:22.814 Cannot find device 
"nvmf_tgt_br2" 00:18:22.814 05:17:12 -- nvmf/common.sh@158 -- # true 00:18:22.814 05:17:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:23.073 05:17:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:23.073 05:17:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:23.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:23.073 05:17:12 -- nvmf/common.sh@161 -- # true 00:18:23.073 05:17:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:23.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:23.073 05:17:12 -- nvmf/common.sh@162 -- # true 00:18:23.073 05:17:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:23.073 05:17:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:23.073 05:17:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:23.073 05:17:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:23.073 05:17:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:23.073 05:17:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:23.073 05:17:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:23.073 05:17:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:23.073 05:17:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:23.073 05:17:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:23.073 05:17:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:23.073 05:17:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:23.073 05:17:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:23.073 05:17:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:23.073 05:17:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:23.073 05:17:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:23.073 05:17:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:23.073 05:17:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:23.073 05:17:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:23.073 05:17:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:23.073 05:17:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:23.073 05:17:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:23.073 05:17:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:23.073 05:17:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:23.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:23.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:18:23.073 00:18:23.073 --- 10.0.0.2 ping statistics --- 00:18:23.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.073 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:18:23.073 05:17:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:23.073 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:23.073 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:18:23.073 00:18:23.073 --- 10.0.0.3 ping statistics --- 00:18:23.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.073 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:23.073 05:17:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:23.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:23.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:18:23.073 00:18:23.073 --- 10.0.0.1 ping statistics --- 00:18:23.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.073 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:18:23.073 05:17:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:23.073 05:17:12 -- nvmf/common.sh@421 -- # return 0 00:18:23.073 05:17:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:23.073 05:17:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:23.073 05:17:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:23.073 05:17:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:23.073 05:17:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:23.073 05:17:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:23.073 05:17:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:23.073 05:17:12 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:23.073 05:17:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:23.073 05:17:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:23.073 05:17:12 -- common/autotest_common.sh@10 -- # set +x 00:18:23.073 05:17:12 -- nvmf/common.sh@469 -- # nvmfpid=84509 00:18:23.073 05:17:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:23.073 05:17:12 -- nvmf/common.sh@470 -- # waitforlisten 84509 00:18:23.073 05:17:12 -- common/autotest_common.sh@829 -- # '[' -z 84509 ']' 00:18:23.073 05:17:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.073 05:17:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:23.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.073 05:17:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.073 05:17:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:23.073 05:17:12 -- common/autotest_common.sh@10 -- # set +x 00:18:23.348 [2024-12-08 05:17:12.887247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:23.348 [2024-12-08 05:17:12.887356] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.348 [2024-12-08 05:17:13.029481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:23.348 [2024-12-08 05:17:13.063402] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:23.348 [2024-12-08 05:17:13.063557] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.348 [2024-12-08 05:17:13.063570] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
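The nvmf_veth_init calls traced above boil down to a small veth-plus-namespace topology: the initiator stays in the root namespace on 10.0.0.1, the target runs inside nvmf_tgt_ns_spdk on 10.0.0.2, and a bridge joins the two veth peers. A condensed sketch of that setup, assuming the interface names and 10.0.0.0/24 addressing shown in the trace (the real sequence lives in nvmf/common.sh and additionally wires up the second target interface on 10.0.0.3 and a bridge FORWARD rule, omitted here):

    # target side lives in its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # addressing: initiator 10.0.0.1, target 10.0.0.2
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # one bridge joins the two veth peers so the namespaces can talk
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # the reachability check whose output is logged above

Note that although 10.0.0.3 is also brought up and pinged, both multipath listeners in this run sit on 10.0.0.2, which is why every probe line below reports @path[10.0.0.2, ...].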
00:18:23.348 [2024-12-08 05:17:13.063579] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:23.348 [2024-12-08 05:17:13.063741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.348 [2024-12-08 05:17:13.063754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.336 05:17:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:24.336 05:17:13 -- common/autotest_common.sh@862 -- # return 0 00:18:24.336 05:17:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:24.336 05:17:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:24.336 05:17:13 -- common/autotest_common.sh@10 -- # set +x 00:18:24.336 05:17:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.336 05:17:13 -- host/multipath.sh@33 -- # nvmfapp_pid=84509 00:18:24.336 05:17:13 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:24.618 [2024-12-08 05:17:14.204436] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.618 05:17:14 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:24.875 Malloc0 00:18:24.875 05:17:14 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:25.132 05:17:14 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:25.390 05:17:15 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:25.653 [2024-12-08 05:17:15.338097] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.653 05:17:15 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:25.909 [2024-12-08 05:17:15.626282] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:25.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:25.909 05:17:15 -- host/multipath.sh@44 -- # bdevperf_pid=84565 00:18:25.909 05:17:15 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:25.909 05:17:15 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:25.909 05:17:15 -- host/multipath.sh@47 -- # waitforlisten 84565 /var/tmp/bdevperf.sock 00:18:25.909 05:17:15 -- common/autotest_common.sh@829 -- # '[' -z 84565 ']' 00:18:25.909 05:17:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:25.909 05:17:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:25.909 05:17:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
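Stripped of the xtrace prefixes, the target-side provisioning for the multipath run amounts to a handful of RPCs. This is only a readability sketch of the calls already visible in the trace, with rpc.py standing in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path:

    rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8192-byte in-capsule data
    rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB malloc bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    #                                                      ^ -r enables ANA reporting, which the test exercises
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # two listeners on the same IP give the host two paths to the one namespace
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The host side then attaches one Nvme0 controller per listener through bdevperf's RPC socket (/var/tmp/bdevperf.sock), with -x multipath on the second attach, as the bdev_nvme_attach_controller calls further down show.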
00:18:25.909 05:17:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:25.909 05:17:15 -- common/autotest_common.sh@10 -- # set +x 00:18:27.277 05:17:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:27.277 05:17:16 -- common/autotest_common.sh@862 -- # return 0 00:18:27.277 05:17:16 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:27.277 05:17:16 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:27.841 Nvme0n1 00:18:27.841 05:17:17 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:28.098 Nvme0n1 00:18:28.098 05:17:17 -- host/multipath.sh@78 -- # sleep 1 00:18:28.098 05:17:17 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:29.031 05:17:18 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:29.031 05:17:18 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:29.287 05:17:19 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:29.544 05:17:19 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:29.544 05:17:19 -- host/multipath.sh@65 -- # dtrace_pid=84619 00:18:29.544 05:17:19 -- host/multipath.sh@66 -- # sleep 6 00:18:29.544 05:17:19 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84509 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:36.126 05:17:25 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:36.126 05:17:25 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:36.126 05:17:25 -- host/multipath.sh@67 -- # active_port=4421 00:18:36.126 05:17:25 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:36.126 Attaching 4 probes... 
00:18:36.126 @path[10.0.0.2, 4421]: 16253 00:18:36.126 @path[10.0.0.2, 4421]: 16416 00:18:36.126 @path[10.0.0.2, 4421]: 17195 00:18:36.126 @path[10.0.0.2, 4421]: 17631 00:18:36.126 @path[10.0.0.2, 4421]: 17906 00:18:36.126 05:17:25 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:36.126 05:17:25 -- host/multipath.sh@69 -- # sed -n 1p 00:18:36.126 05:17:25 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:36.126 05:17:25 -- host/multipath.sh@69 -- # port=4421 00:18:36.126 05:17:25 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:36.126 05:17:25 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:36.126 05:17:25 -- host/multipath.sh@72 -- # kill 84619 00:18:36.126 05:17:25 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:36.126 05:17:25 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:36.126 05:17:25 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:36.383 05:17:26 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:36.948 05:17:26 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:36.948 05:17:26 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84509 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:36.948 05:17:26 -- host/multipath.sh@65 -- # dtrace_pid=84740 00:18:36.948 05:17:26 -- host/multipath.sh@66 -- # sleep 6 00:18:43.501 05:17:32 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:43.501 05:17:32 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:43.501 05:17:32 -- host/multipath.sh@67 -- # active_port=4420 00:18:43.501 05:17:32 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:43.501 Attaching 4 probes... 
00:18:43.501 @path[10.0.0.2, 4420]: 16705 00:18:43.501 @path[10.0.0.2, 4420]: 17827 00:18:43.501 @path[10.0.0.2, 4420]: 17730 00:18:43.501 @path[10.0.0.2, 4420]: 17215 00:18:43.501 @path[10.0.0.2, 4420]: 16728 00:18:43.501 05:17:32 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:43.501 05:17:32 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:43.501 05:17:32 -- host/multipath.sh@69 -- # sed -n 1p 00:18:43.501 05:17:32 -- host/multipath.sh@69 -- # port=4420 00:18:43.501 05:17:32 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:43.501 05:17:32 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:43.501 05:17:32 -- host/multipath.sh@72 -- # kill 84740 00:18:43.501 05:17:32 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:43.501 05:17:32 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:43.501 05:17:32 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:43.501 05:17:33 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:43.759 05:17:33 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:43.759 05:17:33 -- host/multipath.sh@65 -- # dtrace_pid=84848 00:18:43.759 05:17:33 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84509 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:43.759 05:17:33 -- host/multipath.sh@66 -- # sleep 6 00:18:50.339 05:17:39 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:50.339 05:17:39 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:50.339 05:17:39 -- host/multipath.sh@67 -- # active_port=4421 00:18:50.339 05:17:39 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:50.339 Attaching 4 probes... 
00:18:50.339 @path[10.0.0.2, 4421]: 14215 00:18:50.339 @path[10.0.0.2, 4421]: 17865 00:18:50.339 @path[10.0.0.2, 4421]: 17886 00:18:50.339 @path[10.0.0.2, 4421]: 17679 00:18:50.339 @path[10.0.0.2, 4421]: 17951 00:18:50.339 05:17:39 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:50.339 05:17:39 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:50.339 05:17:39 -- host/multipath.sh@69 -- # sed -n 1p 00:18:50.339 05:17:39 -- host/multipath.sh@69 -- # port=4421 00:18:50.339 05:17:39 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:50.339 05:17:39 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:50.339 05:17:39 -- host/multipath.sh@72 -- # kill 84848 00:18:50.339 05:17:39 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:50.339 05:17:39 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:50.339 05:17:39 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:50.339 05:17:40 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:50.596 05:17:40 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:50.596 05:17:40 -- host/multipath.sh@65 -- # dtrace_pid=84966 00:18:50.596 05:17:40 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84509 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:50.596 05:17:40 -- host/multipath.sh@66 -- # sleep 6 00:18:57.198 05:17:46 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:57.198 05:17:46 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:57.198 05:17:46 -- host/multipath.sh@67 -- # active_port= 00:18:57.198 05:17:46 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:57.198 Attaching 4 probes... 
00:18:57.198 00:18:57.199 00:18:57.199 00:18:57.199 00:18:57.199 00:18:57.199 05:17:46 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:57.199 05:17:46 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:57.199 05:17:46 -- host/multipath.sh@69 -- # sed -n 1p 00:18:57.199 05:17:46 -- host/multipath.sh@69 -- # port= 00:18:57.199 05:17:46 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:57.199 05:17:46 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:57.199 05:17:46 -- host/multipath.sh@72 -- # kill 84966 00:18:57.199 05:17:46 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:57.199 05:17:46 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:57.199 05:17:46 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:57.199 05:17:46 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:57.457 05:17:47 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:57.457 05:17:47 -- host/multipath.sh@65 -- # dtrace_pid=85084 00:18:57.457 05:17:47 -- host/multipath.sh@66 -- # sleep 6 00:18:57.457 05:17:47 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84509 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:04.014 05:17:53 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:04.014 05:17:53 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:04.014 05:17:53 -- host/multipath.sh@67 -- # active_port=4421 00:19:04.014 05:17:53 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:04.014 Attaching 4 probes... 
00:19:04.014 @path[10.0.0.2, 4421]: 17427 00:19:04.014 @path[10.0.0.2, 4421]: 17496 00:19:04.014 @path[10.0.0.2, 4421]: 17642 00:19:04.014 @path[10.0.0.2, 4421]: 17488 00:19:04.014 @path[10.0.0.2, 4421]: 17585 00:19:04.014 05:17:53 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:04.014 05:17:53 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:04.014 05:17:53 -- host/multipath.sh@69 -- # sed -n 1p 00:19:04.014 05:17:53 -- host/multipath.sh@69 -- # port=4421 00:19:04.014 05:17:53 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:04.014 05:17:53 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:04.014 05:17:53 -- host/multipath.sh@72 -- # kill 85084 00:19:04.014 05:17:53 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:04.014 05:17:53 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:04.272 [2024-12-08 05:17:53.825936] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.825996] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826010] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826019] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826027] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826036] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826044] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826053] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826061] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826070] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826078] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826086] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826094] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826102] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826110] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826118] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826127] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826135] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826143] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826151] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826159] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826167] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826176] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826184] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826192] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826200] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826208] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826216] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826229] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826237] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826245] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826253] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826262] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826270] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826280] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826289] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826297] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826305] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826313] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826321] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826330] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826338] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826346] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826354] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826362] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826370] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826378] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826386] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826395] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826403] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826411] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 [2024-12-08 05:17:53.826419] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132d80 is same with the state(5) to be set 00:19:04.272 05:17:53 -- host/multipath.sh@101 -- # sleep 1 00:19:05.205 05:17:54 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:19:05.205 05:17:54 -- host/multipath.sh@65 -- # dtrace_pid=85207 00:19:05.205 05:17:54 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84509 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:05.205 05:17:54 -- host/multipath.sh@66 -- # sleep 6 00:19:11.841 05:18:00 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:11.841 05:18:00 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:11.841 05:18:01 -- host/multipath.sh@67 -- # active_port=4420 00:19:11.841 05:18:01 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:11.841 Attaching 4 probes... 
00:19:11.841 @path[10.0.0.2, 4420]: 17762 00:19:11.841 @path[10.0.0.2, 4420]: 17050 00:19:11.841 @path[10.0.0.2, 4420]: 17669 00:19:11.841 @path[10.0.0.2, 4420]: 18040 00:19:11.841 @path[10.0.0.2, 4420]: 17877 00:19:11.841 05:18:01 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:11.841 05:18:01 -- host/multipath.sh@69 -- # sed -n 1p 00:19:11.841 05:18:01 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:11.841 05:18:01 -- host/multipath.sh@69 -- # port=4420 00:19:11.841 05:18:01 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:11.841 05:18:01 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:11.841 05:18:01 -- host/multipath.sh@72 -- # kill 85207 00:19:11.841 05:18:01 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:11.841 05:18:01 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:11.841 [2024-12-08 05:18:01.447975] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:11.841 05:18:01 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:12.099 05:18:01 -- host/multipath.sh@111 -- # sleep 6 00:19:18.711 05:18:07 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:18.711 05:18:07 -- host/multipath.sh@65 -- # dtrace_pid=85387 00:19:18.711 05:18:07 -- host/multipath.sh@66 -- # sleep 6 00:19:18.711 05:18:07 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84509 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:25.286 05:18:13 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:25.286 05:18:13 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:25.286 05:18:14 -- host/multipath.sh@67 -- # active_port=4421 00:19:25.286 05:18:14 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:25.286 Attaching 4 probes... 
00:19:25.286 @path[10.0.0.2, 4421]: 16256 00:19:25.286 @path[10.0.0.2, 4421]: 17610 00:19:25.286 @path[10.0.0.2, 4421]: 17622 00:19:25.286 @path[10.0.0.2, 4421]: 17953 00:19:25.286 @path[10.0.0.2, 4421]: 17997 00:19:25.286 05:18:14 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:25.286 05:18:14 -- host/multipath.sh@69 -- # sed -n 1p 00:19:25.286 05:18:14 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:25.286 05:18:14 -- host/multipath.sh@69 -- # port=4421 00:19:25.286 05:18:14 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:25.286 05:18:14 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:25.286 05:18:14 -- host/multipath.sh@72 -- # kill 85387 00:19:25.286 05:18:14 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:25.286 05:18:14 -- host/multipath.sh@114 -- # killprocess 84565 00:19:25.286 05:18:14 -- common/autotest_common.sh@936 -- # '[' -z 84565 ']' 00:19:25.286 05:18:14 -- common/autotest_common.sh@940 -- # kill -0 84565 00:19:25.286 05:18:14 -- common/autotest_common.sh@941 -- # uname 00:19:25.286 05:18:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:25.286 05:18:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84565 00:19:25.286 killing process with pid 84565 00:19:25.286 05:18:14 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:25.286 05:18:14 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:25.286 05:18:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84565' 00:19:25.286 05:18:14 -- common/autotest_common.sh@955 -- # kill 84565 00:19:25.286 05:18:14 -- common/autotest_common.sh@960 -- # wait 84565 00:19:25.286 Connection closed with partial response: 00:19:25.286 00:19:25.286 00:19:25.286 05:18:14 -- host/multipath.sh@116 -- # wait 84565 00:19:25.286 05:18:14 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:25.286 [2024-12-08 05:17:15.715392] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:25.286 [2024-12-08 05:17:15.715592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84565 ] 00:19:25.286 [2024-12-08 05:17:15.882525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.286 [2024-12-08 05:17:15.925276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.286 Running I/O for 90 seconds... 
00:19:25.286 [2024-12-08 05:17:26.413951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.286 [2024-12-08 05:17:26.414026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:25.286 [2024-12-08 05:17:26.414084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.286 [2024-12-08 05:17:26.414105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:25.286 [2024-12-08 05:17:26.414129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.286 [2024-12-08 05:17:26.414144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:25.286 [2024-12-08 05:17:26.414166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.286 [2024-12-08 05:17:26.414181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:25.286 [2024-12-08 05:17:26.414202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.286 [2024-12-08 05:17:26.414217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:25.286 [2024-12-08 05:17:26.414238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.286 [2024-12-08 05:17:26.414253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:25.286 [2024-12-08 05:17:26.414279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.286 [2024-12-08 05:17:26.414295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:25.286 [2024-12-08 05:17:26.414317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.286 [2024-12-08 05:17:26.414332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:25.286 [2024-12-08 05:17:26.414353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.286 [2024-12-08 05:17:26.414367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:25.286 [2024-12-08 05:17:26.414389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.286 [2024-12-08 05:17:26.414403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:25.286 [2024-12-08 05:17:26.414424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.286 [2024-12-08 05:17:26.414454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:25.286 [2024-12-08 05:17:26.414478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.286 [2024-12-08 05:17:26.414493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:25.286 [2024-12-08 05:17:26.414515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.286 [2024-12-08 05:17:26.414529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:25.286 [2024-12-08 05:17:26.414550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.286 [2024-12-08 05:17:26.414564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:25.286 [2024-12-08 05:17:26.414586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.286 [2024-12-08 05:17:26.414600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:25.286 [2024-12-08 05:17:26.414621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.286 [2024-12-08 05:17:26.414635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:25.286 [2024-12-08 05:17:26.414656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.287 [2024-12-08 05:17:26.414687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.414713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.414728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.414749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.414764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.414785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.287 [2024-12-08 05:17:26.414800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.414821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.414835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.414856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.414871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.414892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.287 [2024-12-08 05:17:26.414906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.414938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.414954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.414976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.287 [2024-12-08 05:17:26.414990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.415017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.415045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.415083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.415113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.415149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.415176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.415770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.287 [2024-12-08 05:17:26.415812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.415848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:25.287 [2024-12-08 05:17:26.415873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.415911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.415936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.415969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.287 [2024-12-08 05:17:26.415993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.416025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.287 [2024-12-08 05:17:26.416050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.416082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.416107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.416139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.416164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.416213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.416240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.416275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.416300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.416332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.416357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.416392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.416420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.416457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:24008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.416483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.416518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.416543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.416575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.416601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.416633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.416658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.416712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.416740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.416773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.416799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.416833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.287 [2024-12-08 05:17:26.416860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.416895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.416921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.416955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.416997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.417034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.287 [2024-12-08 05:17:26.417063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:25.287 [2024-12-08 05:17:26.417099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.287 [2024-12-08 05:17:26.417127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.417163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.417191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.417227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.288 [2024-12-08 05:17:26.417253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.417288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.288 [2024-12-08 05:17:26.417315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.417356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.288 [2024-12-08 05:17:26.417383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.417418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.417444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.417479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.417504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.417537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.417564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.417598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.417624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.417657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.417707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
00:19:25.288 [2024-12-08 05:17:26.417743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.417784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.417820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.417848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.417882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.417907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.417941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.417966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.418001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.418027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.418061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.418088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.418122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.418149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.418184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.418211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.418245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.418272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.418307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.418333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.418367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.288 [2024-12-08 05:17:26.418393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.418426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.418452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.418485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.288 [2024-12-08 05:17:26.418511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.418559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.418587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.418623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.418649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.418702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.288 [2024-12-08 05:17:26.418732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.418766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.418792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.418827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.418852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.418887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.288 [2024-12-08 05:17:26.418913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.418950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.418977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.419013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.288 [2024-12-08 05:17:26.419040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.419077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.288 [2024-12-08 05:17:26.419106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.419142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.419169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.419205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.288 [2024-12-08 05:17:26.419232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:25.288 [2024-12-08 05:17:26.419268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.289 [2024-12-08 05:17:26.419296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.419347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.289 [2024-12-08 05:17:26.419391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.419429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.289 [2024-12-08 05:17:26.419457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.419494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.289 [2024-12-08 05:17:26.419521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.419558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.289 [2024-12-08 05:17:26.419584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.419621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:25.289 [2024-12-08 05:17:26.419648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.419701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.289 [2024-12-08 05:17:26.419732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.419768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.289 [2024-12-08 05:17:26.419795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.421532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.289 [2024-12-08 05:17:26.421584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.421633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.289 [2024-12-08 05:17:26.421663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.421728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.289 [2024-12-08 05:17:26.421759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.421797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.289 [2024-12-08 05:17:26.421825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.421864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.289 [2024-12-08 05:17:26.421893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.421932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.289 [2024-12-08 05:17:26.421981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.422020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.289 [2024-12-08 05:17:26.422048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.422087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 
lba:24920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.289 [2024-12-08 05:17:26.422116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.422155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.289 [2024-12-08 05:17:26.422185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.422223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.289 [2024-12-08 05:17:26.422251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.422291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.289 [2024-12-08 05:17:26.422319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.422381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.289 [2024-12-08 05:17:26.422412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.422448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.289 [2024-12-08 05:17:26.422477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.422513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.289 [2024-12-08 05:17:26.422540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.422576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.289 [2024-12-08 05:17:26.422603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.422639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.289 [2024-12-08 05:17:26.422666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.422723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.289 [2024-12-08 05:17:26.422752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.422790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.289 [2024-12-08 05:17:26.422834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.422873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.289 [2024-12-08 05:17:26.422900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.422936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.289 [2024-12-08 05:17:26.422962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:26.422997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.289 [2024-12-08 05:17:26.423025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:33.088541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:106720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.289 [2024-12-08 05:17:33.088655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:33.088776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.289 [2024-12-08 05:17:33.088815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:33.088855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.289 [2024-12-08 05:17:33.088883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:33.088921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.289 [2024-12-08 05:17:33.088949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:33.088985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.289 [2024-12-08 05:17:33.089015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:33.089052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:106760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.289 [2024-12-08 05:17:33.089080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 
dnr:0 00:19:25.289 [2024-12-08 05:17:33.089117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.289 [2024-12-08 05:17:33.089146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:33.089182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:106776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.289 [2024-12-08 05:17:33.089213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:33.089254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:106784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.289 [2024-12-08 05:17:33.089320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:25.289 [2024-12-08 05:17:33.089364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:106792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.089394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.089435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:106800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.089465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.089503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:106808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.089532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.089570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:106816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.290 [2024-12-08 05:17:33.089602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.089642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:106824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.290 [2024-12-08 05:17:33.089698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.089745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:106832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.089774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.089815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:106840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.290 [2024-12-08 05:17:33.089848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.089888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:106848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.089918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.089957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.089986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.090026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.090056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.090095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:106184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.090124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.090164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.090194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.090254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.090285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.090324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.090354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.090394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.090423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.090465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.090494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.090534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.090566] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.090606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:106864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.290 [2024-12-08 05:17:33.090635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.090697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:106872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.090733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.090775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:106880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.090803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.090838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:106888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.290 [2024-12-08 05:17:33.090867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.090917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.290 [2024-12-08 05:17:33.090949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.090988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.091017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.091060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:106912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.290 [2024-12-08 05:17:33.091090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.091150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:106920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.091182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.091221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.091253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.091292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:106936 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.091321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.091360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:106944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.290 [2024-12-08 05:17:33.091403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.091447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:106952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.091477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.091515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:106960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.290 [2024-12-08 05:17:33.091547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.091587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:106968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.091618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.091657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:106272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.091710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:25.290 [2024-12-08 05:17:33.091755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.290 [2024-12-08 05:17:33.091783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.091828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.091861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.091902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:106320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.091931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.091973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.092000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.092041] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:106336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.092094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.092137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.092169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.092208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.092238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.092279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.092308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.092348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:106984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.291 [2024-12-08 05:17:33.092378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.092420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:106992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.092450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.092492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:107000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.092521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.092559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:107008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.092590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.092632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.291 [2024-12-08 05:17:33.092661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.092730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.092761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:25.291 
[2024-12-08 05:17:33.092802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.092835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.092875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:107040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.291 [2024-12-08 05:17:33.092904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.092944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:107048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.291 [2024-12-08 05:17:33.092991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.093033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.093066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.093108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.093137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.093177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.291 [2024-12-08 05:17:33.093206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.093249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:106376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.093282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.093323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.093353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.093392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:106424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.093422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.093462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:106440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.093491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.093530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.093560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.093600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.093631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.093693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:106504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.093728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.093769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.093799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.093842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:107080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.093884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.093930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:107088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.291 [2024-12-08 05:17:33.093959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.094000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.094030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.094069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.291 [2024-12-08 05:17:33.094099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.094141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:107112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.291 [2024-12-08 05:17:33.094170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:25.291 [2024-12-08 05:17:33.094215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:107120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.292 [2024-12-08 05:17:33.094241] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.094283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:107128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.292 [2024-12-08 05:17:33.094315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.094356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:107136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.292 [2024-12-08 05:17:33.094386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.094425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.292 [2024-12-08 05:17:33.094456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.094494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.292 [2024-12-08 05:17:33.094531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.094573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.292 [2024-12-08 05:17:33.094603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.094644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:107168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.292 [2024-12-08 05:17:33.094695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.094740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.292 [2024-12-08 05:17:33.094770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.094853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:107184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.292 [2024-12-08 05:17:33.094889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.094931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.292 [2024-12-08 05:17:33.094959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.094997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:107200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:25.292 [2024-12-08 05:17:33.095024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.095065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.292 [2024-12-08 05:17:33.095095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.095134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:107216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.292 [2024-12-08 05:17:33.095162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.095201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.292 [2024-12-08 05:17:33.095228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.095265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.292 [2024-12-08 05:17:33.095292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.095329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:106536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.292 [2024-12-08 05:17:33.095357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.095410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.292 [2024-12-08 05:17:33.095437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.095473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:106568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.292 [2024-12-08 05:17:33.095503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.095540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:106576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.292 [2024-12-08 05:17:33.095566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.095601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.292 [2024-12-08 05:17:33.095629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.095704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:26 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.292 [2024-12-08 05:17:33.095737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.095773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:106608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.292 [2024-12-08 05:17:33.095800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.095837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:106624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.292 [2024-12-08 05:17:33.095864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.095901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:107240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.292 [2024-12-08 05:17:33.095928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.095965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:107248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.292 [2024-12-08 05:17:33.095990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.096023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:107256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.292 [2024-12-08 05:17:33.096048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.096082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.292 [2024-12-08 05:17:33.096107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.096140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.292 [2024-12-08 05:17:33.096165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.096198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.292 [2024-12-08 05:17:33.096224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.096257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.292 [2024-12-08 05:17:33.096281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.096317] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.292 [2024-12-08 05:17:33.096346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.096382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.292 [2024-12-08 05:17:33.096408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.096443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:107312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.292 [2024-12-08 05:17:33.096484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.096522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.292 [2024-12-08 05:17:33.096549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.096585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.292 [2024-12-08 05:17:33.096611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.096646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.292 [2024-12-08 05:17:33.096693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.096735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:107344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.292 [2024-12-08 05:17:33.096763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.096800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.292 [2024-12-08 05:17:33.096827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:25.292 [2024-12-08 05:17:33.096862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:33.096888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:33.096923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:106664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:33.096948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 
sqhd:001a p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:33.096984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:33.097009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:33.097043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:33.097069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:33.097104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:106688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:33.097130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:33.097165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:33.097190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:33.098209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:33.098270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:33.098328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:33.098357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:33.098406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:107360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.293 [2024-12-08 05:17:33.098434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:33.098481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:33.098507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:33.098553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:33.098581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:33.098627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:107384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.293 [2024-12-08 05:17:33.098653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:33.098721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.293 [2024-12-08 05:17:33.098752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:33.098799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:107400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:33.098831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:33.098878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.293 [2024-12-08 05:17:33.098904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:33.098952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:107416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.293 [2024-12-08 05:17:33.098979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.267524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:31224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.293 [2024-12-08 05:17:40.267622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.267712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:40.267738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.267763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:31240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.293 [2024-12-08 05:17:40.267801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.267827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.293 [2024-12-08 05:17:40.267842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.267865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:31256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.293 [2024-12-08 05:17:40.267879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.267901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:40.267916] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.267939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:31272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.293 [2024-12-08 05:17:40.267954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.267982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:31280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.293 [2024-12-08 05:17:40.267998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.268019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:40.268034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.268056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.293 [2024-12-08 05:17:40.268071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.268092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:40.268107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.268128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.293 [2024-12-08 05:17:40.268143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.268626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:31320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.293 [2024-12-08 05:17:40.268647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.268689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:30624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:40.268721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.268748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:30632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:40.268765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.268802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:40.268828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.268865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:40.268885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.268919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:40.268948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.268975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:30680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:40.268990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.269014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:40.269029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.269053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:40.269068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.269091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.293 [2024-12-08 05:17:40.269106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:25.293 [2024-12-08 05:17:40.269129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.294 [2024-12-08 05:17:40.269144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.269167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.294 [2024-12-08 05:17:40.269182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.269205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:31352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.294 [2024-12-08 05:17:40.269220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.269245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:93 nsid:1 lba:31360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.294 [2024-12-08 05:17:40.269261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.269284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.294 [2024-12-08 05:17:40.269299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.269332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:31376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.294 [2024-12-08 05:17:40.269350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.269387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.294 [2024-12-08 05:17:40.269414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.269439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:31392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.294 [2024-12-08 05:17:40.269455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.269479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:31400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.294 [2024-12-08 05:17:40.269495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.269518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:31408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.294 [2024-12-08 05:17:40.269534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.269557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.294 [2024-12-08 05:17:40.269572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.269595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.294 [2024-12-08 05:17:40.269610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.269633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.294 [2024-12-08 05:17:40.269648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.269684] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.294 [2024-12-08 05:17:40.269703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.269727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.294 [2024-12-08 05:17:40.269743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.269766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:30784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.294 [2024-12-08 05:17:40.269781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.269804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.294 [2024-12-08 05:17:40.269820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.269843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.294 [2024-12-08 05:17:40.269868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.269892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.294 [2024-12-08 05:17:40.269908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.269931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:30856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.294 [2024-12-08 05:17:40.269947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.269975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:31432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.294 [2024-12-08 05:17:40.269991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.270014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:31440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.294 [2024-12-08 05:17:40.270029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.270052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:31448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.294 [2024-12-08 05:17:40.270068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 
sqhd:0004 p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.270091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:31456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.294 [2024-12-08 05:17:40.270106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.270129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:31464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.294 [2024-12-08 05:17:40.270145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.270168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.294 [2024-12-08 05:17:40.270184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.270207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.294 [2024-12-08 05:17:40.270222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.270245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.294 [2024-12-08 05:17:40.270261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:25.294 [2024-12-08 05:17:40.270283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.294 [2024-12-08 05:17:40.270298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.270321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.295 [2024-12-08 05:17:40.270344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.270368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.295 [2024-12-08 05:17:40.270384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.270407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.295 [2024-12-08 05:17:40.270422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.270448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.295 [2024-12-08 05:17:40.270477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.270513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:31536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.295 [2024-12-08 05:17:40.270529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.270553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:31544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.295 [2024-12-08 05:17:40.270568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.270594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:31552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.295 [2024-12-08 05:17:40.270620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.270650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:31560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.295 [2024-12-08 05:17:40.270669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.270726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:31568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.295 [2024-12-08 05:17:40.270751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.270775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.295 [2024-12-08 05:17:40.270791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.270814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.295 [2024-12-08 05:17:40.270831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.270854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.295 [2024-12-08 05:17:40.270869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.270891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.295 [2024-12-08 05:17:40.270906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.270941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.295 [2024-12-08 05:17:40.270958] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.270980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.295 [2024-12-08 05:17:40.270995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.271018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.295 [2024-12-08 05:17:40.271033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.271056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.295 [2024-12-08 05:17:40.271071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.271093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:31576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.295 [2024-12-08 05:17:40.271108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.271131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:31584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.295 [2024-12-08 05:17:40.271146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.271169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.295 [2024-12-08 05:17:40.271193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.271231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.295 [2024-12-08 05:17:40.271250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.271275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.295 [2024-12-08 05:17:40.271294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.271318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:31616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.295 [2024-12-08 05:17:40.271333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.271356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:25.295 [2024-12-08 05:17:40.271385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.271412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.295 [2024-12-08 05:17:40.271427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.271461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.295 [2024-12-08 05:17:40.271477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.271500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.295 [2024-12-08 05:17:40.271516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.271539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.295 [2024-12-08 05:17:40.271554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:25.295 [2024-12-08 05:17:40.271576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:31664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.296 [2024-12-08 05:17:40.271592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:25.296 [2024-12-08 05:17:40.271615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:31672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.296 [2024-12-08 05:17:40.271630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:25.296 [2024-12-08 05:17:40.271664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.296 [2024-12-08 05:17:40.271694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:25.296 [2024-12-08 05:17:40.271720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.296 [2024-12-08 05:17:40.271737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:25.296 [2024-12-08 05:17:40.271760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.296 [2024-12-08 05:17:40.271775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:25.296 [2024-12-08 05:17:40.271798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:38 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.296 [2024-12-08 05:17:40.271813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:25.296 [2024-12-08 05:17:40.271836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:31120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.296 [2024-12-08 05:17:40.271851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:25.296 [2024-12-08 05:17:40.271874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.296 [2024-12-08 05:17:40.271890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:25.296 [2024-12-08 05:17:40.272971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.296 [2024-12-08 05:17:40.273001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:25.296 [2024-12-08 05:17:40.273038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:31160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.296 [2024-12-08 05:17:40.273068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:25.296 [2024-12-08 05:17:40.273101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.296 [2024-12-08 05:17:40.273118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:25.296 [2024-12-08 05:17:40.273149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.296 [2024-12-08 05:17:40.273164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:25.296 [2024-12-08 05:17:40.273194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:31696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.296 [2024-12-08 05:17:40.273209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:25.296 [2024-12-08 05:17:40.273239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.296 [2024-12-08 05:17:40.273255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:25.296 [2024-12-08 05:17:40.273295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:31712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.296 [2024-12-08 05:17:40.273313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:25.296 [2024-12-08 05:17:40.273343] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:31720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.296 [2024-12-08 05:17:40.273358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:25.296 [2024-12-08 05:17:40.273389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.296 [2024-12-08 05:17:40.273404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:25.296 [2024-12-08 05:17:40.273434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:31736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.296 [2024-12-08 05:17:40.273449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:25.296 [2024-12-08 05:17:40.273479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:31744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.296 [2024-12-08 05:17:40.273494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:25.296 [2024-12-08 05:17:40.273524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:31752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.296 [2024-12-08 05:17:40.273539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:25.296 [2024-12-08 05:17:40.273569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.296 [2024-12-08 05:17:40.273585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:25.296 [2024-12-08 05:17:40.273615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:31768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.296 [2024-12-08 05:17:40.273639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:25.296 [2024-12-08 05:17:40.273686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.296 [2024-12-08 05:17:40.273707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:40.273739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:40.273755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:40.273785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:31792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:40.273800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 
m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:40.273830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:40.273845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:40.273876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.297 [2024-12-08 05:17:40.273891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:40.273938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:31816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.297 [2024-12-08 05:17:40.273958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:40.273996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.297 [2024-12-08 05:17:40.274012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:40.274042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.297 [2024-12-08 05:17:40.274057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:40.274087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.297 [2024-12-08 05:17:40.274102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:40.274132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:31848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.297 [2024-12-08 05:17:40.274148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:40.274178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:31856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:40.274194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:40.274224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:31864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:40.274239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:53.826480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:119400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:53.826537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:53.826564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:119408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:53.826580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:53.826597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:119416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:53.826611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:53.826627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:119432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:53.826640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:53.826656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:119448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:53.826670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:53.826704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:53.826718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:53.826734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:119464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:53.826747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:53.826762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:119472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:53.826776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:53.826791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:53.826805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:53.826819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:53.826833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:53.826848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:53.826861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:53.826876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:53.826890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:53.826905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:53.826938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:53.826954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:53.826968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:53.826983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:53.826997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:53.827012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:119496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:53.827026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:53.827041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:119520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:53.827055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.297 [2024-12-08 05:17:53.827071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:119544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.297 [2024-12-08 05:17:53.827084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:119552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.298 [2024-12-08 05:17:53.827113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.298 [2024-12-08 05:17:53.827141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.298 [2024-12-08 05:17:53.827169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:25.298 [2024-12-08 05:17:53.827185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:119624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.298 [2024-12-08 05:17:53.827198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:119640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.298 [2024-12-08 05:17:53.827226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.298 [2024-12-08 05:17:53.827255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.298 [2024-12-08 05:17:53.827283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.298 [2024-12-08 05:17:53.827322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.298 [2024-12-08 05:17:53.827352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:120232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.298 [2024-12-08 05:17:53.827398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.298 [2024-12-08 05:17:53.827428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.298 [2024-12-08 05:17:53.827457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.298 [2024-12-08 05:17:53.827486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 
05:17:53.827501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.298 [2024-12-08 05:17:53.827515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:120272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.298 [2024-12-08 05:17:53.827544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.298 [2024-12-08 05:17:53.827574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.298 [2024-12-08 05:17:53.827603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.298 [2024-12-08 05:17:53.827632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.298 [2024-12-08 05:17:53.827660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.298 [2024-12-08 05:17:53.827712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.298 [2024-12-08 05:17:53.827743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.298 [2024-12-08 05:17:53.827773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.298 [2024-12-08 05:17:53.827802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.298 [2024-12-08 05:17:53.827831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.298 [2024-12-08 05:17:53.827860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:119656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.298 [2024-12-08 05:17:53.827889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:119672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.298 [2024-12-08 05:17:53.827918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:119704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.298 [2024-12-08 05:17:53.827947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.298 [2024-12-08 05:17:53.827976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.298 [2024-12-08 05:17:53.827992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:119760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.298 [2024-12-08 05:17:53.828006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:119784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.299 [2024-12-08 05:17:53.828035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:119800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.299 [2024-12-08 05:17:53.828064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:119816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.299 [2024-12-08 05:17:53.828100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828115] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.299 [2024-12-08 05:17:53.828129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.299 [2024-12-08 05:17:53.828158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.299 [2024-12-08 05:17:53.828187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.299 [2024-12-08 05:17:53.828217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.299 [2024-12-08 05:17:53.828246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.299 [2024-12-08 05:17:53.828275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.299 [2024-12-08 05:17:53.828305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.299 [2024-12-08 05:17:53.828333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.299 [2024-12-08 05:17:53.828362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.299 [2024-12-08 05:17:53.828391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 
lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.299 [2024-12-08 05:17:53.828419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.299 [2024-12-08 05:17:53.828459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.299 [2024-12-08 05:17:53.828489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.299 [2024-12-08 05:17:53.828518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.299 [2024-12-08 05:17:53.828548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.299 [2024-12-08 05:17:53.828576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.299 [2024-12-08 05:17:53.828606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.299 [2024-12-08 05:17:53.828635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.299 [2024-12-08 05:17:53.828663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:119824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.299 [2024-12-08 05:17:53.828707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:119840 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:25.299 [2024-12-08 05:17:53.828736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:119856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.299 [2024-12-08 05:17:53.828766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:119864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.299 [2024-12-08 05:17:53.828795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.299 [2024-12-08 05:17:53.828824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.299 [2024-12-08 05:17:53.828862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.299 [2024-12-08 05:17:53.828891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.299 [2024-12-08 05:17:53.828906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.299 [2024-12-08 05:17:53.828920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.828935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.300 [2024-12-08 05:17:53.828949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.828964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.300 [2024-12-08 05:17:53.828983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.828999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.300 [2024-12-08 05:17:53.829013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.300 
[2024-12-08 05:17:53.829042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.300 [2024-12-08 05:17:53.829071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.300 [2024-12-08 05:17:53.829100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.300 [2024-12-08 05:17:53.829129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.300 [2024-12-08 05:17:53.829158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.300 [2024-12-08 05:17:53.829187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.300 [2024-12-08 05:17:53.829221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.300 [2024-12-08 05:17:53.829252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.300 [2024-12-08 05:17:53.829283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.300 [2024-12-08 05:17:53.829313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.300 [2024-12-08 05:17:53.829342] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.300 [2024-12-08 05:17:53.829371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.300 [2024-12-08 05:17:53.829400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.300 [2024-12-08 05:17:53.829429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.300 [2024-12-08 05:17:53.829459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.300 [2024-12-08 05:17:53.829489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.300 [2024-12-08 05:17:53.829518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:119992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.300 [2024-12-08 05:17:53.829547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.300 [2024-12-08 05:17:53.829576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.300 [2024-12-08 05:17:53.829611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.300 [2024-12-08 05:17:53.829641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.300 [2024-12-08 05:17:53.829670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.300 [2024-12-08 05:17:53.829713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.300 [2024-12-08 05:17:53.829742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.300 [2024-12-08 05:17:53.829773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.300 [2024-12-08 05:17:53.829802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.300 [2024-12-08 05:17:53.829818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.300 [2024-12-08 05:17:53.829831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.829846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.301 [2024-12-08 05:17:53.829860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.829875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.301 [2024-12-08 05:17:53.829888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.829903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.301 [2024-12-08 05:17:53.829917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.829932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.301 [2024-12-08 05:17:53.829947] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.829963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.301 [2024-12-08 05:17:53.829976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.829998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.301 [2024-12-08 05:17:53.830012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.830027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.301 [2024-12-08 05:17:53.830041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.830056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.301 [2024-12-08 05:17:53.830069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.830084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.301 [2024-12-08 05:17:53.830098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.830113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.301 [2024-12-08 05:17:53.830127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.830142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.301 [2024-12-08 05:17:53.830155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.830170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.301 [2024-12-08 05:17:53.830184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.830199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.301 [2024-12-08 05:17:53.830212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.830228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.301 [2024-12-08 05:17:53.830243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.830259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.301 [2024-12-08 05:17:53.830273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.830288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.301 [2024-12-08 05:17:53.830301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.830316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.301 [2024-12-08 05:17:53.830330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.830345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.301 [2024-12-08 05:17:53.830364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.830380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.301 [2024-12-08 05:17:53.830393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.830408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1721810 is same with the state(5) to be set 00:19:25.301 [2024-12-08 05:17:53.830427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.301 [2024-12-08 05:17:53.830438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.301 [2024-12-08 05:17:53.830449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120192 len:8 PRP1 0x0 PRP2 0x0 00:19:25.301 [2024-12-08 05:17:53.830462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.830509] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1721810 was disconnected and freed. reset controller. 
00:19:25.301 [2024-12-08 05:17:53.830619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.301 [2024-12-08 05:17:53.830645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.830661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.301 [2024-12-08 05:17:53.830692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.830710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.301 [2024-12-08 05:17:53.830723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.830738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.301 [2024-12-08 05:17:53.830751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.301 [2024-12-08 05:17:53.830764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16aaf30 is same with the state(5) to be set 00:19:25.301 [2024-12-08 05:17:53.831846] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:25.301 [2024-12-08 05:17:53.831887] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16aaf30 (9): Bad file descriptor 00:19:25.301 [2024-12-08 05:17:53.832200] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:25.301 [2024-12-08 05:17:53.832276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:25.301 [2024-12-08 05:17:53.832329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:25.301 [2024-12-08 05:17:53.832353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16aaf30 with addr=10.0.0.2, port=4421 00:19:25.301 [2024-12-08 05:17:53.832373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16aaf30 is same with the state(5) to be set 00:19:25.301 [2024-12-08 05:17:53.832408] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16aaf30 (9): Bad file descriptor 00:19:25.301 [2024-12-08 05:17:53.832439] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:25.302 [2024-12-08 05:17:53.832455] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:25.302 [2024-12-08 05:17:53.832482] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:25.302 [2024-12-08 05:17:53.832517] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:25.302 [2024-12-08 05:17:53.832535] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:25.302 [2024-12-08 05:18:03.893586] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:25.302 Received shutdown signal, test time was about 56.387757 seconds 00:19:25.302 00:19:25.302 Latency(us) 00:19:25.302 [2024-12-08T05:18:15.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.302 [2024-12-08T05:18:15.088Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:25.302 Verification LBA range: start 0x0 length 0x4000 00:19:25.302 Nvme0n1 : 56.39 10007.98 39.09 0.00 0.00 12769.23 592.06 7046430.72 00:19:25.302 [2024-12-08T05:18:15.088Z] =================================================================================================================== 00:19:25.302 [2024-12-08T05:18:15.088Z] Total : 10007.98 39.09 0.00 0.00 12769.23 592.06 7046430.72 00:19:25.302 05:18:14 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:25.302 05:18:14 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:25.302 05:18:14 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:25.302 05:18:14 -- host/multipath.sh@125 -- # nvmftestfini 00:19:25.302 05:18:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:25.302 05:18:14 -- nvmf/common.sh@116 -- # sync 00:19:25.302 05:18:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:25.302 05:18:14 -- nvmf/common.sh@119 -- # set +e 00:19:25.302 05:18:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:25.302 05:18:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:25.302 rmmod nvme_tcp 00:19:25.302 rmmod nvme_fabrics 00:19:25.302 rmmod nvme_keyring 00:19:25.302 05:18:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:25.302 05:18:14 -- nvmf/common.sh@123 -- # set -e 00:19:25.302 05:18:14 -- nvmf/common.sh@124 -- # return 0 00:19:25.302 05:18:14 -- nvmf/common.sh@477 -- # '[' -n 84509 ']' 00:19:25.302 05:18:14 -- nvmf/common.sh@478 -- # killprocess 84509 00:19:25.302 05:18:14 -- common/autotest_common.sh@936 -- # '[' -z 84509 ']' 00:19:25.302 05:18:14 -- common/autotest_common.sh@940 -- # kill -0 84509 00:19:25.302 05:18:14 -- common/autotest_common.sh@941 -- # uname 00:19:25.302 05:18:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:25.302 05:18:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84509 00:19:25.302 05:18:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:25.302 05:18:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:25.302 killing process with pid 84509 00:19:25.302 05:18:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84509' 00:19:25.302 05:18:14 -- common/autotest_common.sh@955 -- # kill 84509 00:19:25.302 05:18:14 -- common/autotest_common.sh@960 -- # wait 84509 00:19:25.302 05:18:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:25.302 05:18:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:25.302 05:18:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:25.302 05:18:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:25.302 05:18:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:25.302 05:18:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.302 05:18:14 -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.302 05:18:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.302 05:18:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:25.302 00:19:25.302 real 1m2.705s 00:19:25.302 user 2m54.444s 00:19:25.302 sys 0m18.874s 00:19:25.302 05:18:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:25.302 05:18:14 -- common/autotest_common.sh@10 -- # set +x 00:19:25.302 ************************************ 00:19:25.302 END TEST nvmf_multipath 00:19:25.302 ************************************ 00:19:25.302 05:18:15 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:25.302 05:18:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:25.302 05:18:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:25.302 05:18:15 -- common/autotest_common.sh@10 -- # set +x 00:19:25.302 ************************************ 00:19:25.302 START TEST nvmf_timeout 00:19:25.302 ************************************ 00:19:25.302 05:18:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:25.560 * Looking for test storage... 00:19:25.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:25.560 05:18:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:25.560 05:18:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:25.560 05:18:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:25.560 05:18:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:25.560 05:18:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:25.560 05:18:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:25.560 05:18:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:25.560 05:18:15 -- scripts/common.sh@335 -- # IFS=.-: 00:19:25.560 05:18:15 -- scripts/common.sh@335 -- # read -ra ver1 00:19:25.560 05:18:15 -- scripts/common.sh@336 -- # IFS=.-: 00:19:25.560 05:18:15 -- scripts/common.sh@336 -- # read -ra ver2 00:19:25.560 05:18:15 -- scripts/common.sh@337 -- # local 'op=<' 00:19:25.560 05:18:15 -- scripts/common.sh@339 -- # ver1_l=2 00:19:25.560 05:18:15 -- scripts/common.sh@340 -- # ver2_l=1 00:19:25.560 05:18:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:25.560 05:18:15 -- scripts/common.sh@343 -- # case "$op" in 00:19:25.560 05:18:15 -- scripts/common.sh@344 -- # : 1 00:19:25.560 05:18:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:25.560 05:18:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:25.560 05:18:15 -- scripts/common.sh@364 -- # decimal 1 00:19:25.560 05:18:15 -- scripts/common.sh@352 -- # local d=1 00:19:25.560 05:18:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:25.560 05:18:15 -- scripts/common.sh@354 -- # echo 1 00:19:25.560 05:18:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:25.560 05:18:15 -- scripts/common.sh@365 -- # decimal 2 00:19:25.560 05:18:15 -- scripts/common.sh@352 -- # local d=2 00:19:25.560 05:18:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:25.560 05:18:15 -- scripts/common.sh@354 -- # echo 2 00:19:25.560 05:18:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:25.560 05:18:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:25.560 05:18:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:25.560 05:18:15 -- scripts/common.sh@367 -- # return 0 00:19:25.560 05:18:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:25.560 05:18:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:25.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.560 --rc genhtml_branch_coverage=1 00:19:25.560 --rc genhtml_function_coverage=1 00:19:25.560 --rc genhtml_legend=1 00:19:25.560 --rc geninfo_all_blocks=1 00:19:25.560 --rc geninfo_unexecuted_blocks=1 00:19:25.560 00:19:25.560 ' 00:19:25.560 05:18:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:25.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.560 --rc genhtml_branch_coverage=1 00:19:25.560 --rc genhtml_function_coverage=1 00:19:25.560 --rc genhtml_legend=1 00:19:25.560 --rc geninfo_all_blocks=1 00:19:25.560 --rc geninfo_unexecuted_blocks=1 00:19:25.560 00:19:25.560 ' 00:19:25.560 05:18:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:25.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.560 --rc genhtml_branch_coverage=1 00:19:25.560 --rc genhtml_function_coverage=1 00:19:25.560 --rc genhtml_legend=1 00:19:25.560 --rc geninfo_all_blocks=1 00:19:25.560 --rc geninfo_unexecuted_blocks=1 00:19:25.560 00:19:25.560 ' 00:19:25.560 05:18:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:25.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.560 --rc genhtml_branch_coverage=1 00:19:25.560 --rc genhtml_function_coverage=1 00:19:25.560 --rc genhtml_legend=1 00:19:25.560 --rc geninfo_all_blocks=1 00:19:25.560 --rc geninfo_unexecuted_blocks=1 00:19:25.560 00:19:25.560 ' 00:19:25.560 05:18:15 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:25.560 05:18:15 -- nvmf/common.sh@7 -- # uname -s 00:19:25.560 05:18:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.560 05:18:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.560 05:18:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.560 05:18:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.560 05:18:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.560 05:18:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.560 05:18:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.560 05:18:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.560 05:18:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.560 05:18:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.560 05:18:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:19:25.560 
05:18:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:19:25.560 05:18:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.560 05:18:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.560 05:18:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:25.560 05:18:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:25.560 05:18:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.560 05:18:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.560 05:18:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.560 05:18:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.560 05:18:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.560 05:18:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.560 05:18:15 -- paths/export.sh@5 -- # export PATH 00:19:25.560 05:18:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.560 05:18:15 -- nvmf/common.sh@46 -- # : 0 00:19:25.560 05:18:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:25.560 05:18:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:25.560 05:18:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:25.560 05:18:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.560 05:18:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.560 05:18:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
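The NVME_HOSTNQN/NVME_HOSTID pair generated a few entries up (in this run the host ID is simply the UUID suffix of the freshly generated NQN) is collected into NVME_HOST so that helpers driving the kernel initiator can hand it to nvme-cli. This particular test pushes I/O through bdevperf rather than nvme-cli, so the following is only an illustrative, hypothetical invocation assembled from those variables and the listener configured later in the log:

    # hypothetical kernel-initiator connect using the identity generated above;
    # not executed by this run, which uses bdevperf instead
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"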
00:19:25.560 05:18:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:25.560 05:18:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:25.560 05:18:15 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:25.560 05:18:15 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:25.560 05:18:15 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:25.560 05:18:15 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:25.560 05:18:15 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:25.560 05:18:15 -- host/timeout.sh@19 -- # nvmftestinit 00:19:25.560 05:18:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:25.560 05:18:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:25.560 05:18:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:25.560 05:18:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:25.560 05:18:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:25.560 05:18:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.560 05:18:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.560 05:18:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.560 05:18:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:25.560 05:18:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:25.560 05:18:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:25.560 05:18:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:25.560 05:18:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:25.560 05:18:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:25.560 05:18:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:25.560 05:18:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:25.560 05:18:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:25.560 05:18:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:25.560 05:18:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:25.560 05:18:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:25.560 05:18:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:25.560 05:18:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:25.560 05:18:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:25.560 05:18:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:25.560 05:18:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:25.560 05:18:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:25.561 05:18:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:25.561 05:18:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:25.561 Cannot find device "nvmf_tgt_br" 00:19:25.561 05:18:15 -- nvmf/common.sh@154 -- # true 00:19:25.561 05:18:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:25.561 Cannot find device "nvmf_tgt_br2" 00:19:25.561 05:18:15 -- nvmf/common.sh@155 -- # true 00:19:25.561 05:18:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:25.561 05:18:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:25.561 Cannot find device "nvmf_tgt_br" 00:19:25.561 05:18:15 -- nvmf/common.sh@157 -- # true 00:19:25.561 05:18:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:25.561 Cannot find device "nvmf_tgt_br2" 00:19:25.561 05:18:15 -- nvmf/common.sh@158 -- # true 00:19:25.561 05:18:15 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:25.819 05:18:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:25.819 05:18:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:25.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:25.819 05:18:15 -- nvmf/common.sh@161 -- # true 00:19:25.819 05:18:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:25.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:25.819 05:18:15 -- nvmf/common.sh@162 -- # true 00:19:25.819 05:18:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:25.819 05:18:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:25.819 05:18:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:25.819 05:18:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:25.819 05:18:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:25.819 05:18:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:25.819 05:18:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:25.819 05:18:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:25.819 05:18:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:25.819 05:18:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:25.819 05:18:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:25.819 05:18:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:25.819 05:18:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:25.819 05:18:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:25.819 05:18:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:25.819 05:18:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:25.819 05:18:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:25.819 05:18:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:25.819 05:18:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:25.819 05:18:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:25.819 05:18:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:25.819 05:18:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:25.819 05:18:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:25.819 05:18:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:25.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:25.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:19:25.819 00:19:25.819 --- 10.0.0.2 ping statistics --- 00:19:25.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.819 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:19:25.819 05:18:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:25.819 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:25.819 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:19:25.819 00:19:25.819 --- 10.0.0.3 ping statistics --- 00:19:25.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.819 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:19:25.819 05:18:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:25.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:25.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:19:25.819 00:19:25.819 --- 10.0.0.1 ping statistics --- 00:19:25.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.819 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:19:25.819 05:18:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:25.819 05:18:15 -- nvmf/common.sh@421 -- # return 0 00:19:25.819 05:18:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:25.819 05:18:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:25.819 05:18:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:25.819 05:18:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:25.819 05:18:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:25.819 05:18:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:25.819 05:18:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:25.819 05:18:15 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:25.819 05:18:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:25.819 05:18:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:25.819 05:18:15 -- common/autotest_common.sh@10 -- # set +x 00:19:25.819 05:18:15 -- nvmf/common.sh@469 -- # nvmfpid=85707 00:19:25.819 05:18:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:25.819 05:18:15 -- nvmf/common.sh@470 -- # waitforlisten 85707 00:19:25.819 05:18:15 -- common/autotest_common.sh@829 -- # '[' -z 85707 ']' 00:19:25.819 05:18:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.819 05:18:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:25.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.819 05:18:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.819 05:18:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:25.819 05:18:15 -- common/autotest_common.sh@10 -- # set +x 00:19:26.077 [2024-12-08 05:18:15.653095] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:26.077 [2024-12-08 05:18:15.653225] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.077 [2024-12-08 05:18:15.797250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:26.077 [2024-12-08 05:18:15.833138] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:26.077 [2024-12-08 05:18:15.833279] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.077 [2024-12-08 05:18:15.833293] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
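Condensed for reference, the nvmf_veth_init plumbing traced above builds a small virtual topology: the target namespace nvmf_tgt_ns_spdk holds nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), the host keeps nvmf_init_if (10.0.0.1), and the *_br veth peers are enslaved to the nvmf_br bridge so the three addresses can reach each other. The sketch below is just a replay of the commands already logged, nothing beyond what the run executed:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up          # the other links and the namespace's lo are brought up the same way
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br   # likewise nvmf_tgt_br and nvmf_tgt_br2
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The single-packet pings to 10.0.0.2, 10.0.0.3 and, from inside the namespace, 10.0.0.1 whose statistics appear above are the sanity check that this wiring works before nvmf_tgt is started.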
00:19:26.077 [2024-12-08 05:18:15.833302] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:26.077 [2024-12-08 05:18:15.834456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.077 [2024-12-08 05:18:15.834498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.059 05:18:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:27.059 05:18:16 -- common/autotest_common.sh@862 -- # return 0 00:19:27.059 05:18:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:27.059 05:18:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:27.059 05:18:16 -- common/autotest_common.sh@10 -- # set +x 00:19:27.059 05:18:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.059 05:18:16 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:27.059 05:18:16 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:27.317 [2024-12-08 05:18:16.960145] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.317 05:18:16 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:27.576 Malloc0 00:19:27.576 05:18:17 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:27.834 05:18:17 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:28.092 05:18:17 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:28.350 [2024-12-08 05:18:17.993217] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.350 05:18:18 -- host/timeout.sh@32 -- # bdevperf_pid=85756 00:19:28.350 05:18:18 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:28.350 05:18:18 -- host/timeout.sh@34 -- # waitforlisten 85756 /var/tmp/bdevperf.sock 00:19:28.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.350 05:18:18 -- common/autotest_common.sh@829 -- # '[' -z 85756 ']' 00:19:28.350 05:18:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.350 05:18:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:28.350 05:18:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:28.350 05:18:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:28.350 05:18:18 -- common/autotest_common.sh@10 -- # set +x 00:19:28.350 [2024-12-08 05:18:18.060860] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
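Stripped of timestamps, the target-side configuration that timeout.sh performs just above reduces to five RPCs against the nvmf_tgt now running in the namespace, issued through the rpc_py path set earlier; the sketch below only restates the calls visible in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport with the options logged above
    $rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevperf process launched right afterwards (pid 85756, RPC socket /var/tmp/bdevperf.sock) is then pointed at this 10.0.0.2:4420 listener, as the remainder of the trace shows.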
00:19:28.350 [2024-12-08 05:18:18.060964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85756 ] 00:19:28.608 [2024-12-08 05:18:18.199868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.608 [2024-12-08 05:18:18.241389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.540 05:18:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:29.540 05:18:19 -- common/autotest_common.sh@862 -- # return 0 00:19:29.540 05:18:19 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:29.797 05:18:19 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:30.054 NVMe0n1 00:19:30.054 05:18:19 -- host/timeout.sh@51 -- # rpc_pid=85785 00:19:30.054 05:18:19 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:30.054 05:18:19 -- host/timeout.sh@53 -- # sleep 1 00:19:30.311 Running I/O for 10 seconds... 00:19:31.246 05:18:20 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:31.506 [2024-12-08 05:18:21.041683] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.506 [2024-12-08 05:18:21.042348] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.506 [2024-12-08 05:18:21.042458] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.506 [2024-12-08 05:18:21.042538] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.506 [2024-12-08 05:18:21.042602] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.506 [2024-12-08 05:18:21.042663] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.506 [2024-12-08 05:18:21.042782] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.506 [2024-12-08 05:18:21.042850] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.506 [2024-12-08 05:18:21.042913] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.506 [2024-12-08 05:18:21.042993] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.506 [2024-12-08 05:18:21.043056] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.506 [2024-12-08 05:18:21.043123] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.506 
[2024-12-08 05:18:21.043185] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.506 [2024-12-08 05:18:21.043250] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.506 [2024-12-08 05:18:21.043328] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.506 [2024-12-08 05:18:21.043423] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.506 [2024-12-08 05:18:21.043492] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.506 [2024-12-08 05:18:21.043575] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.506 [2024-12-08 05:18:21.043637] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.506 [2024-12-08 05:18:21.043745] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.506 [2024-12-08 05:18:21.043813] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.506 [2024-12-08 05:18:21.043889] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.506 [2024-12-08 05:18:21.043970] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.507 [2024-12-08 05:18:21.044043] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.507 [2024-12-08 05:18:21.044129] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.507 [2024-12-08 05:18:21.044196] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.507 [2024-12-08 05:18:21.044275] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.507 [2024-12-08 05:18:21.044351] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.507 [2024-12-08 05:18:21.044417] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.507 [2024-12-08 05:18:21.044475] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.507 [2024-12-08 05:18:21.044497] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.507 [2024-12-08 05:18:21.044507] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be9c0 is same with the state(5) to be set 00:19:31.507 [2024-12-08 05:18:21.044580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:107568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.044612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.044637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.044647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.044660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.044670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.044697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.044707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.044718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:107616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.044729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.044740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.044750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.044761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.044770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.044782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.044791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.044802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:108208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.044811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.044822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:108224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.044831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.044843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:108248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.044852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.044863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:108256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.044873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.044885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:108264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.044895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.044906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:108280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.044916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.044927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:108288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.044936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.044949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:108304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.044959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.044970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.044980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.044992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:108320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.045001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:107688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.045022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.045042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.045063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:31.507 [2024-12-08 05:18:21.045074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.045083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.045104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.045125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.045146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.045166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.507 [2024-12-08 05:18:21.045187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.045208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.507 [2024-12-08 05:18:21.045228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:108352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.045249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:108360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.045269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 
05:18:21.045281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.045290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:108376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.045313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:108384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.507 [2024-12-08 05:18:21.045333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:108392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.507 [2024-12-08 05:18:21.045354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:108400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.045374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:108408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.507 [2024-12-08 05:18:21.045395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:108416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.507 [2024-12-08 05:18:21.045416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.507 [2024-12-08 05:18:21.045436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:108432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.045457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:108440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.507 [2024-12-08 05:18:21.045478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.507 [2024-12-08 05:18:21.045498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.507 [2024-12-08 05:18:21.045519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:108464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.507 [2024-12-08 05:18:21.045539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:108472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.507 [2024-12-08 05:18:21.045560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.045581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.045601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.507 [2024-12-08 05:18:21.045613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.507 [2024-12-08 05:18:21.045622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.045634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.045643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.045654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.045663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.045693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.045705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.045718] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:28 nsid:1 lba:107944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.045728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.045739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.045748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.045760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.045769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.045781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.045790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.045801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:108496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.045810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.045822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.508 [2024-12-08 05:18:21.045831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.045842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.508 [2024-12-08 05:18:21.045851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.045863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:108520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.045872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.045883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.045892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.045904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:108536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.045913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.045925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 
nsid:1 lba:108544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.508 [2024-12-08 05:18:21.045934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.045946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.045955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.045967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:108560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.045976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.045987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:108568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.045997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.046008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.046017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.046029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:108584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.046038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.046049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:108592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.046059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.046070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.508 [2024-12-08 05:18:21.046079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.046090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:108608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.508 [2024-12-08 05:18:21.046099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.046110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.046120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.046131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107976 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.046141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.046152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.046161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.046174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:108000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.046183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.046195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:108008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.046204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.046215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.046224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.046235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:108024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.046244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.046255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:108080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.046264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.046280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.046290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.046302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:108624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.508 [2024-12-08 05:18:21.046311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.046322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:108632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.508 [2024-12-08 05:18:21.046332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.508 [2024-12-08 05:18:21.046343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:31.508 [2024-12-08 05:18:21.046352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... nvme_io_qpair_print_command / spdk_nvme_print_completion notice pairs repeat here for the remaining queued READ and WRITE commands on sqid:1 (lba 108104-108896), each completed as ABORTED - SQ DELETION (00/08) ...]
00:19:31.509 [2024-12-08 05:18:21.047308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb39cf0 is same with the state(5) to be set
00:19:31.509 [2024-12-08 05:18:21.047322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:31.509 [2024-12-08 05:18:21.047330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:31.509 [2024-12-08 05:18:21.047339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108296 len:8 PRP1 0x0 PRP2 0x0
00:19:31.509 [2024-12-08 05:18:21.047348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:31.509 [2024-12-08 05:18:21.047408] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb39cf0 was disconnected and freed. reset controller.
00:19:31.509 [2024-12-08 05:18:21.047723] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:31.509 [2024-12-08 05:18:21.047839] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae9c20 (9): Bad file descriptor 00:19:31.509 [2024-12-08 05:18:21.047957] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:31.509 [2024-12-08 05:18:21.048043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:31.509 [2024-12-08 05:18:21.048099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:31.509 [2024-12-08 05:18:21.048117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae9c20 with addr=10.0.0.2, port=4420 00:19:31.509 [2024-12-08 05:18:21.048128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae9c20 is same with the state(5) to be set 00:19:31.509 [2024-12-08 05:18:21.048150] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae9c20 (9): Bad file descriptor 00:19:31.509 [2024-12-08 05:18:21.048167] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:31.509 [2024-12-08 05:18:21.048177] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:31.509 [2024-12-08 05:18:21.048188] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:31.509 [2024-12-08 05:18:21.048210] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:31.509 [2024-12-08 05:18:21.048221] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:31.509 05:18:21 -- host/timeout.sh@56 -- # sleep 2 00:19:33.407 [2024-12-08 05:18:23.048384] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:33.407 [2024-12-08 05:18:23.048533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:33.407 [2024-12-08 05:18:23.048583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:33.407 [2024-12-08 05:18:23.048601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae9c20 with addr=10.0.0.2, port=4420 00:19:33.407 [2024-12-08 05:18:23.048617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae9c20 is same with the state(5) to be set 00:19:33.408 [2024-12-08 05:18:23.048660] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae9c20 (9): Bad file descriptor 00:19:33.408 [2024-12-08 05:18:23.048725] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:33.408 [2024-12-08 05:18:23.048739] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:33.408 [2024-12-08 05:18:23.048751] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:33.408 [2024-12-08 05:18:23.048779] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:33.408 [2024-12-08 05:18:23.048791] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:33.408 05:18:23 -- host/timeout.sh@57 -- # get_controller 00:19:33.408 05:18:23 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:33.408 05:18:23 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:33.666 05:18:23 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:33.666 05:18:23 -- host/timeout.sh@58 -- # get_bdev 00:19:33.666 05:18:23 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:33.666 05:18:23 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:33.924 05:18:23 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:33.924 05:18:23 -- host/timeout.sh@61 -- # sleep 5 00:19:35.299 [2024-12-08 05:18:25.048957] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:35.299 [2024-12-08 05:18:25.049072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:35.299 [2024-12-08 05:18:25.049121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:35.299 [2024-12-08 05:18:25.049138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae9c20 with addr=10.0.0.2, port=4420 00:19:35.299 [2024-12-08 05:18:25.049153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae9c20 is same with the state(5) to be set 00:19:35.299 [2024-12-08 05:18:25.049181] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae9c20 (9): Bad file descriptor 00:19:35.299 [2024-12-08 05:18:25.049201] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:35.299 [2024-12-08 05:18:25.049212] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:35.299 [2024-12-08 05:18:25.049222] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:35.299 [2024-12-08 05:18:25.049251] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:35.299 [2024-12-08 05:18:25.049263] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:37.848 [2024-12-08 05:18:27.049296] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:37.848 [2024-12-08 05:18:27.049371] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:37.848 [2024-12-08 05:18:27.049385] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:37.848 [2024-12-08 05:18:27.049395] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:37.848 [2024-12-08 05:18:27.049431] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
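The get_controller / get_bdev steps traced above reduce to two queries against the bdevperf RPC socket. A minimal standalone sketch of that check (script path, socket path and expected names are taken from the trace itself):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

# Ask the running bdevperf instance which NVMe controller and bdev are registered.
controller=$("$RPC" -s "$SOCK" bdev_nvme_get_controllers | jq -r '.[].name')
bdev=$("$RPC" -s "$SOCK" bdev_get_bdevs | jq -r '.[].name')

# The test expects both to still be present while the target is unreachable.
[[ $controller == NVMe0 && $bdev == NVMe0n1 ]] && echo 'NVMe0/NVMe0n1 still registered'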
00:19:38.414 
00:19:38.414 Latency(us)
00:19:38.414 [2024-12-08T05:18:28.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:38.414 [2024-12-08T05:18:28.200Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:38.414 Verification LBA range: start 0x0 length 0x4000
00:19:38.415 NVMe0n1 : 8.14 1655.90 6.47 15.72 0.00 76443.83 3008.70 7015926.69
00:19:38.415 [2024-12-08T05:18:28.201Z] ===================================================================================================================
00:19:38.415 [2024-12-08T05:18:28.201Z] Total : 1655.90 6.47 15.72 0.00 76443.83 3008.70 7015926.69
00:19:38.415 0
00:19:38.982 05:18:28 -- host/timeout.sh@62 -- # get_controller
00:19:38.982 05:18:28 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:38.982 05:18:28 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:19:39.240 05:18:28 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:19:39.240 05:18:28 -- host/timeout.sh@63 -- # get_bdev
00:19:39.240 05:18:28 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:19:39.240 05:18:28 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:19:39.807 05:18:29 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:19:39.807 05:18:29 -- host/timeout.sh@65 -- # wait 85785
00:19:39.807 05:18:29 -- host/timeout.sh@67 -- # killprocess 85756
00:19:39.807 05:18:29 -- common/autotest_common.sh@936 -- # '[' -z 85756 ']'
00:19:39.807 05:18:29 -- common/autotest_common.sh@940 -- # kill -0 85756
00:19:39.807 05:18:29 -- common/autotest_common.sh@941 -- # uname
00:19:39.807 05:18:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:39.807 05:18:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85756
00:19:39.807 05:18:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:19:39.807 killing process with pid 85756
00:19:39.807 05:18:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:19:39.807 05:18:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85756'
00:19:39.807 05:18:29 -- common/autotest_common.sh@955 -- # kill 85756
00:19:39.807 Received shutdown signal, test time was about 9.425118 seconds
00:19:39.807 
00:19:39.807 Latency(us)
00:19:39.807 [2024-12-08T05:18:29.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:39.807 [2024-12-08T05:18:29.593Z] ===================================================================================================================
00:19:39.807 [2024-12-08T05:18:29.593Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:39.807 05:18:29 -- common/autotest_common.sh@960 -- # wait 85756
00:19:39.807 05:18:29 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:40.065 [2024-12-08 05:18:29.702341] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:40.065 05:18:29 -- host/timeout.sh@74 -- # bdevperf_pid=85908
00:19:40.065 05:18:29 -- host/timeout.sh@76 -- # waitforlisten 85908 /var/tmp/bdevperf.sock
00:19:40.065 05:18:29 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:19:40.065 05:18:29 -- common/autotest_common.sh@829 -- # '[' -z 85908 ']'
00:19:40.065 05:18:29 -- common/autotest_common.sh@833 -- # local
rpc_addr=/var/tmp/bdevperf.sock 00:19:40.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.065 05:18:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:40.065 05:18:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.065 05:18:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:40.065 05:18:29 -- common/autotest_common.sh@10 -- # set +x 00:19:40.065 [2024-12-08 05:18:29.768407] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:40.065 [2024-12-08 05:18:29.768520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85908 ] 00:19:40.324 [2024-12-08 05:18:29.909159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.324 [2024-12-08 05:18:29.944626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.303 05:18:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.303 05:18:30 -- common/autotest_common.sh@862 -- # return 0 00:19:41.303 05:18:30 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:41.303 05:18:31 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:41.561 NVMe0n1 00:19:41.561 05:18:31 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:41.561 05:18:31 -- host/timeout.sh@84 -- # rpc_pid=85926 00:19:41.561 05:18:31 -- host/timeout.sh@86 -- # sleep 1 00:19:41.819 Running I/O for 10 seconds... 
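As a quick self-consistency check on the first run's result table above (an aside, using the 4096-byte I/O size and depth 128 from the job line): 1655.90 IOPS x 4096 B / 2^20 is about 6.47 MiB/s, matching the MiB/s column, and 15.72 Fail/s x 8.14 s runtime is about 128 failed I/Os, i.e. roughly one queue depth of inflight commands aborted when the connection to the target was lost. The same arithmetic as a one-liner:

awk 'BEGIN { printf "MiB/s = %.2f, failed I/Os = %.0f\n", 1655.90 * 4096 / 1048576, 15.72 * 8.14 }'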
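Condensed from the trace above, the setup the test drives for this second bdevperf run (paths, flags and timeout values exactly as logged; the backgrounding and ordering here are only a readable sketch, not the harness's exact invocation):

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bdevperf.sock

# bdevperf starts idle (-z) on core 2 and waits for configuration over its RPC socket:
# queue depth 128, 4 KiB I/O, verify workload, 10 s run.
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 -f &

# bdev_nvme options as logged, then attach the NVMe/TCP controller with a 1 s reconnect
# delay, 2 s fast-io-fail timeout and 5 s ctrlr-loss timeout.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options -r -1
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
  --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

# Kick off the I/O; the test then removes the target listener underneath it (host/timeout.sh@87 below).
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests &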
00:19:42.751 05:18:32 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:43.011 [2024-12-08 05:18:32.615642] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17be520 is same with the state(5) to be set
[... the same nvmf_tcp_qpair_set_recv_state error for tqpair=0x17be520 repeats another twenty-odd times over the following few hundred microseconds ...]
00:19:43.011 [2024-12-08 05:18:32.615942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:118928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:43.011 [2024-12-08 05:18:32.615974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... nvme_io_qpair_print_command / spdk_nvme_print_completion notice pairs repeat here for the remaining queued READ and WRITE commands on sqid:1 (lba 118384-119568), each completed as ABORTED - SQ DELETION (00/08) ...]
00:19:43.014 [2024-12-08 05:18:32.618312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:119568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:43.014 [2024-12-08 05:18:32.618321] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.014 [2024-12-08 05:18:32.618333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:119576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.014 [2024-12-08 05:18:32.618344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.014 [2024-12-08 05:18:32.618355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:119584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.014 [2024-12-08 05:18:32.618365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.014 [2024-12-08 05:18:32.618376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:119592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.014 [2024-12-08 05:18:32.618385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.014 [2024-12-08 05:18:32.618396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:119600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.014 [2024-12-08 05:18:32.618405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.014 [2024-12-08 05:18:32.618417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:119608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.014 [2024-12-08 05:18:32.618426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.014 [2024-12-08 05:18:32.618438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:119616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.014 [2024-12-08 05:18:32.618454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.014 [2024-12-08 05:18:32.618472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:119624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.014 [2024-12-08 05:18:32.618489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.014 [2024-12-08 05:18:32.618506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:119632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.014 [2024-12-08 05:18:32.618517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.014 [2024-12-08 05:18:32.618528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:119640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.014 [2024-12-08 05:18:32.618537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.014 [2024-12-08 05:18:32.618548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:119648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.014 [2024-12-08 05:18:32.618557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.014 [2024-12-08 05:18:32.618569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:119656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.014 [2024-12-08 05:18:32.618578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.014 [2024-12-08 05:18:32.618589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:118976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.014 [2024-12-08 05:18:32.618598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.014 [2024-12-08 05:18:32.618609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:118984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.014 [2024-12-08 05:18:32.618618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.014 [2024-12-08 05:18:32.618630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:118992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.014 [2024-12-08 05:18:32.618639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.014 [2024-12-08 05:18:32.618650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:119000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.014 [2024-12-08 05:18:32.618659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.014 [2024-12-08 05:18:32.618686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:119008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.014 [2024-12-08 05:18:32.618699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.014 [2024-12-08 05:18:32.618710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1115cf0 is same with the state(5) to be set 00:19:43.014 [2024-12-08 05:18:32.618724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:43.014 [2024-12-08 05:18:32.618732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:43.015 [2024-12-08 05:18:32.618741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119024 len:8 PRP1 0x0 PRP2 0x0 00:19:43.015 [2024-12-08 05:18:32.618750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.015 [2024-12-08 05:18:32.618793] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1115cf0 was disconnected and freed. reset controller. 
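The block above is the initiator printing every queued command that was aborted when its submission queue was deleted: each nvme_io_qpair_print_command line (READ/WRITE, LBA, length) is paired with an ABORTED - SQ DELETION completion, after which the qpair is freed and a controller reset begins. A rough way to summarize such a dump after the fact is a pair of shell one-liners along these lines; the file name nvmf-timeout.log is a hypothetical saved copy of this console output, not something the test itself produces:

  # Count log lines that report an aborted completion.
  grep -c 'ABORTED - SQ DELETION' nvmf-timeout.log

  # Break the aborted commands down by opcode (READ vs WRITE) as printed by nvme_io_qpair_print_command.
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' nvmf-timeout.log | sort | uniq -c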
00:19:43.015 [2024-12-08 05:18:32.619050] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:43.015 [2024-12-08 05:18:32.619140] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c5c20 (9): Bad file descriptor
00:19:43.015 [2024-12-08 05:18:32.619242] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:43.015 [2024-12-08 05:18:32.619316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:43.015 [2024-12-08 05:18:32.619372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:43.015 [2024-12-08 05:18:32.619390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10c5c20 with addr=10.0.0.2, port=4420
[2024-12-08 05:18:32.619401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c5c20 is same with the state(5) to be set
00:19:43.015 [2024-12-08 05:18:32.619420] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c5c20 (9): Bad file descriptor
00:19:43.015 [2024-12-08 05:18:32.619437] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:43.015 [2024-12-08 05:18:32.619446] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:43.015 [2024-12-08 05:18:32.619456] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:43.015 [2024-12-08 05:18:32.619477] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:43.015 [2024-12-08 05:18:32.619489] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:43.015 05:18:32 -- host/timeout.sh@90 -- # sleep 1
00:19:43.949 [2024-12-08 05:18:33.619662] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:43.949 [2024-12-08 05:18:33.619817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:43.949 [2024-12-08 05:18:33.619885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:43.949 [2024-12-08 05:18:33.619911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10c5c20 with addr=10.0.0.2, port=4420
[2024-12-08 05:18:33.619930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c5c20 is same with the state(5) to be set
00:19:43.949 [2024-12-08 05:18:33.619969] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c5c20 (9): Bad file descriptor
00:19:43.949 [2024-12-08 05:18:33.619998] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:43.949 [2024-12-08 05:18:33.620014] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:43.949 [2024-12-08 05:18:33.620030] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:43.949 [2024-12-08 05:18:33.620077] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
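Both reset attempts above fail at connect() with errno = 111 (connection refused), consistent with the target's TCP listener on 10.0.0.2:4420 having been dropped earlier in the timeout test. The log that follows shows the recovery half of the cycle: the listener is re-added over RPC, the next controller reset succeeds, bdevperf reports the statistics for the verify run, and the listener is removed again for the next case. A minimal sketch of that remove/re-add cycle, reusing the same rpc.py and bdevperf.py invocations that appear in the log (the surrounding shell is illustrative only, not the actual host/timeout.sh):

  # Drop the listener so I/O queued by the initiator starts timing out and gets aborted.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  sleep 1   # give the initiator time to hit its timeout/abort and reset path

  # Restore the listener; the next reconnect/reset attempt can then succeed.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Start another timed I/O run against the attached namespace via bdevperf's RPC socket.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests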
00:19:43.949 [2024-12-08 05:18:33.620097] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:43.949 05:18:33 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:44.208 [2024-12-08 05:18:33.940664] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:44.208 05:18:33 -- host/timeout.sh@92 -- # wait 85926
00:19:45.144 [2024-12-08 05:18:34.637160] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:51.723
00:19:51.723                                                                                      Latency(us)
00:19:51.723 [2024-12-08T05:18:41.509Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:51.723 [2024-12-08T05:18:41.509Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:51.723                               Verification LBA range: start 0x0 length 0x4000
00:19:51.723                               NVMe0n1                 :      10.01    9050.84      35.35       0.00     0.00   14118.76     990.49 3019898.88
00:19:51.723 [2024-12-08T05:18:41.509Z] ===================================================================================================================
00:19:51.723 [2024-12-08T05:18:41.509Z] Total                       :               9050.84      35.35       0.00     0.00   14118.76     990.49 3019898.88
00:19:51.723 0
00:19:51.723 05:18:41 -- host/timeout.sh@97 -- # rpc_pid=86037
00:19:51.723 05:18:41 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:51.723 05:18:41 -- host/timeout.sh@98 -- # sleep 1
00:19:51.980 Running I/O for 10 seconds...
00:19:52.913 05:18:42 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:53.174 [2024-12-08 05:18:42.765445] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c4f10 is same with the state(5) to be set
00:19:53.174 [2024-12-08 05:18:42.765500] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c4f10 is same with the state(5) to be set
00:19:53.174 [2024-12-08 05:18:42.765512] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c4f10 is same with the state(5) to be set
00:19:53.174 [2024-12-08 05:18:42.765521] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c4f10 is same with the state(5) to be set
00:19:53.174 [2024-12-08 05:18:42.765529] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c4f10 is same with the state(5) to be set
00:19:53.174 [2024-12-08 05:18:42.765537] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c4f10 is same with the state(5) to be set
00:19:53.174 [2024-12-08 05:18:42.765546] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c4f10 is same with the state(5) to be set
00:19:53.174 [2024-12-08 05:18:42.765554] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c4f10 is same with the state(5) to be set
00:19:53.174 [2024-12-08 05:18:42.765562] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c4f10 is same with the state(5) to be set
00:19:53.174 [2024-12-08 05:18:42.765570] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c4f10 is same with the state(5) to be set
00:19:53.174 [2024-12-08 05:18:42.765579] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0x17c4f10 is same with the state(5) to be set 00:19:53.174 [2024-12-08 05:18:42.765587] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c4f10 is same with the state(5) to be set 00:19:53.174 [2024-12-08 05:18:42.765595] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c4f10 is same with the state(5) to be set 00:19:53.174 [2024-12-08 05:18:42.765603] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c4f10 is same with the state(5) to be set 00:19:53.174 [2024-12-08 05:18:42.765611] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c4f10 is same with the state(5) to be set 00:19:53.174 [2024-12-08 05:18:42.765683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.174 [2024-12-08 05:18:42.765714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.174 [2024-12-08 05:18:42.765743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.174 [2024-12-08 05:18:42.765754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.174 [2024-12-08 05:18:42.765766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.174 [2024-12-08 05:18:42.765778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.174 [2024-12-08 05:18:42.765796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.174 [2024-12-08 05:18:42.765807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.174 [2024-12-08 05:18:42.765821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:112960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.174 [2024-12-08 05:18:42.765836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.174 [2024-12-08 05:18:42.765854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.174 [2024-12-08 05:18:42.765868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.174 [2024-12-08 05:18:42.765880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.174 [2024-12-08 05:18:42.765889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.174 [2024-12-08 05:18:42.765901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.174 [2024-12-08 05:18:42.765910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.174 
[2024-12-08 05:18:42.765924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.174 [2024-12-08 05:18:42.765939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.174 [2024-12-08 05:18:42.765956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:113608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.174 [2024-12-08 05:18:42.765965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.174 [2024-12-08 05:18:42.765978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.174 [2024-12-08 05:18:42.765992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:113640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.175 [2024-12-08 05:18:42.766144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766190] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:113072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:113112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:113736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766441] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.175 [2024-12-08 05:18:42.766482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.175 [2024-12-08 05:18:42.766530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.175 [2024-12-08 05:18:42.766592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.175 [2024-12-08 05:18:42.766613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.175 [2024-12-08 05:18:42.766639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.175 [2024-12-08 05:18:42.766713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766725] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:117 nsid:1 lba:113824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.175 [2024-12-08 05:18:42.766796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:113144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:113224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.175 [2024-12-08 05:18:42.766945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.175 [2024-12-08 05:18:42.766954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.766968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.766980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.766992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 
lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:113848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:113872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.176 [2024-12-08 05:18:42.767105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:113896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:113904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.176 [2024-12-08 05:18:42.767241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113920 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.176 [2024-12-08 05:18:42.767289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:113944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.176 [2024-12-08 05:18:42.767352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:113976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:113984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 
[2024-12-08 05:18:42.767549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.176 [2024-12-08 05:18:42.767764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.176 [2024-12-08 05:18:42.767785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:114008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:114016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767839] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.176 [2024-12-08 05:18:42.767868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:114032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:114040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:114048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.176 [2024-12-08 05:18:42.767950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.176 [2024-12-08 05:18:42.767973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.767984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:114064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.176 [2024-12-08 05:18:42.767996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.176 [2024-12-08 05:18:42.768013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:114072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.177 [2024-12-08 05:18:42.768023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.177 [2024-12-08 05:18:42.768043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.177 [2024-12-08 05:18:42.768073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:114096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.768101] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.177 [2024-12-08 05:18:42.768122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:114112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.768152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:114120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.768174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:114128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.768207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:114136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.768239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.768261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.768293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:113432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.768314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.768347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.768375] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:113536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.768397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.768429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.768457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.768483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.177 [2024-12-08 05:18:42.768514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:114160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.768545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:114168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.768566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.177 [2024-12-08 05:18:42.768595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:114184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.768619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:114192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.768643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.177 [2024-12-08 05:18:42.768696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.177 [2024-12-08 05:18:42.768719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.177 [2024-12-08 05:18:42.768755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.177 [2024-12-08 05:18:42.768777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:114232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.768808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:114240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.768837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:114248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.768857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.177 [2024-12-08 05:18:42.768882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.177 [2024-12-08 05:18:42.768903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.177 [2024-12-08 05:18:42.768926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.177 [2024-12-08 05:18:42.768961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.768979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:113592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.768989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.769001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.769011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.769027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:113616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.177 [2024-12-08 05:18:42.769043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.177 [2024-12-08 05:18:42.769061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.178 [2024-12-08 05:18:42.769071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.178 [2024-12-08 05:18:42.769084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:113672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.178 [2024-12-08 05:18:42.769097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.178 [2024-12-08 05:18:42.769116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.178 [2024-12-08 05:18:42.769132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.178 [2024-12-08 05:18:42.769149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.178 [2024-12-08 05:18:42.769159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.178 [2024-12-08 05:18:42.769172] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cbe40 is same with the state(5) to be set 00:19:53.178 [2024-12-08 05:18:42.769190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:53.178 [2024-12-08 05:18:42.769199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:53.178 [2024-12-08 05:18:42.769208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113704 len:8 PRP1 0x0 PRP2 0x0 00:19:53.178 [2024-12-08 05:18:42.769217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:53.178 [2024-12-08 05:18:42.769269] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11cbe40 was disconnected and freed. reset controller. 00:19:53.178 [2024-12-08 05:18:42.769358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.178 [2024-12-08 05:18:42.769378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.178 [2024-12-08 05:18:42.769395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.178 [2024-12-08 05:18:42.769410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.178 [2024-12-08 05:18:42.769423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.178 [2024-12-08 05:18:42.769436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.178 [2024-12-08 05:18:42.769452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.178 [2024-12-08 05:18:42.769465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.178 [2024-12-08 05:18:42.769475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c5c20 is same with the state(5) to be set 00:19:53.178 [2024-12-08 05:18:42.769772] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:53.178 [2024-12-08 05:18:42.769812] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c5c20 (9): Bad file descriptor 00:19:53.178 [2024-12-08 05:18:42.769913] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.178 [2024-12-08 05:18:42.769986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.178 [2024-12-08 05:18:42.770044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.178 [2024-12-08 05:18:42.770069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10c5c20 with addr=10.0.0.2, port=4420 00:19:53.178 [2024-12-08 05:18:42.770086] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c5c20 is same with the state(5) to be set 00:19:53.178 [2024-12-08 05:18:42.770116] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c5c20 (9): Bad file descriptor 00:19:53.178 [2024-12-08 05:18:42.770135] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:53.178 [2024-12-08 05:18:42.770147] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:53.178 [2024-12-08 05:18:42.770162] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:53.178 [2024-12-08 05:18:42.770189] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:53.178 [2024-12-08 05:18:42.770204] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:53.178 05:18:42 -- host/timeout.sh@101 -- # sleep 3 00:19:54.175 [2024-12-08 05:18:43.770330] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:54.175 [2024-12-08 05:18:43.770423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:54.176 [2024-12-08 05:18:43.770468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:54.176 [2024-12-08 05:18:43.770484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10c5c20 with addr=10.0.0.2, port=4420 00:19:54.176 [2024-12-08 05:18:43.770498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c5c20 is same with the state(5) to be set 00:19:54.176 [2024-12-08 05:18:43.770523] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c5c20 (9): Bad file descriptor 00:19:54.176 [2024-12-08 05:18:43.770543] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:54.176 [2024-12-08 05:18:43.770553] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:54.176 [2024-12-08 05:18:43.770564] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:54.176 [2024-12-08 05:18:43.770591] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:54.176 [2024-12-08 05:18:43.770603] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:55.110 [2024-12-08 05:18:44.770749] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:55.110 [2024-12-08 05:18:44.770845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:55.110 [2024-12-08 05:18:44.770891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:55.110 [2024-12-08 05:18:44.770921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10c5c20 with addr=10.0.0.2, port=4420 00:19:55.110 [2024-12-08 05:18:44.770942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c5c20 is same with the state(5) to be set 00:19:55.110 [2024-12-08 05:18:44.770984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c5c20 (9): Bad file descriptor 00:19:55.110 [2024-12-08 05:18:44.771016] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:55.110 [2024-12-08 05:18:44.771027] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:55.110 [2024-12-08 05:18:44.771037] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:55.110 [2024-12-08 05:18:44.771066] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:55.110 [2024-12-08 05:18:44.771083] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:56.042 [2024-12-08 05:18:45.773114] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.042 [2024-12-08 05:18:45.773209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.042 [2024-12-08 05:18:45.773254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.042 [2024-12-08 05:18:45.773271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10c5c20 with addr=10.0.0.2, port=4420 00:19:56.042 [2024-12-08 05:18:45.773284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c5c20 is same with the state(5) to be set 00:19:56.042 [2024-12-08 05:18:45.773480] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c5c20 (9): Bad file descriptor 00:19:56.042 [2024-12-08 05:18:45.773602] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:56.042 [2024-12-08 05:18:45.773626] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:56.042 [2024-12-08 05:18:45.773638] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:56.042 [2024-12-08 05:18:45.776205] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:56.042 [2024-12-08 05:18:45.776235] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:56.042 05:18:45 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:56.299 [2024-12-08 05:18:46.060864] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.557 05:18:46 -- host/timeout.sh@103 -- # wait 86037 00:19:57.121 [2024-12-08 05:18:46.808450] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
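The reconnect storm above is driven by the test tearing the TCP listener down and bringing it back: while the listener is gone, every connect() to 10.0.0.2:4420 fails with errno 111 (ECONNREFUSED) and bdev_nvme keeps resetting the controller, and the first attempt after host/timeout.sh re-adds the listener is the one logged as "Resetting controller successful." A minimal sketch of that toggle, built only from the rpc.py invocations that appear in this trace (the rpc.py path, NQN, address and port are copied from the log; the sleep length is an illustrative stand-in, not taken from timeout.sh):

  # Paths and identifiers below are copied from the trace; the sleep is illustrative only.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Remove the listener: queued I/O on the qpair is aborted (SQ DELETION) and every
  # reconnect attempt fails with connect() errno = 111 until the listener returns.
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  sleep 3

  # Re-add the listener: the next reconnect attempt succeeds
  # ("Resetting controller successful." in the log above).
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420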
00:20:02.410 
00:20:02.410 Latency(us) 
00:20:02.410 [2024-12-08T05:18:52.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:02.410 [2024-12-08T05:18:52.196Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 
00:20:02.410 Verification LBA range: start 0x0 length 0x4000 
00:20:02.410 NVMe0n1 : 10.01 7802.31 30.48 5653.71 0.00 9495.80 428.22 3019898.88 
00:20:02.410 [2024-12-08T05:18:52.196Z] =================================================================================================================== 
00:20:02.410 [2024-12-08T05:18:52.196Z] Total : 7802.31 30.48 5653.71 0.00 9495.80 0.00 3019898.88 
00:20:02.410 0 
00:20:02.410 05:18:51 -- host/timeout.sh@105 -- # killprocess 85908 
00:20:02.410 05:18:51 -- common/autotest_common.sh@936 -- # '[' -z 85908 ']' 
00:20:02.410 05:18:51 -- common/autotest_common.sh@940 -- # kill -0 85908 
00:20:02.410 05:18:51 -- common/autotest_common.sh@941 -- # uname 
00:20:02.410 05:18:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:20:02.410 05:18:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85908 
00:20:02.410 05:18:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 
00:20:02.410 05:18:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 
00:20:02.410 05:18:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85908' 
00:20:02.410 killing process with pid 85908 
00:20:02.410 05:18:51 -- common/autotest_common.sh@955 -- # kill 85908 
00:20:02.410 Received shutdown signal, test time was about 10.000000 seconds 
00:20:02.410 
00:20:02.410 Latency(us) 
00:20:02.410 [2024-12-08T05:18:52.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:02.410 [2024-12-08T05:18:52.196Z] =================================================================================================================== 
00:20:02.410 [2024-12-08T05:18:52.196Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:20:02.410 05:18:51 -- common/autotest_common.sh@960 -- # wait 85908 
00:20:02.410 05:18:51 -- host/timeout.sh@110 -- # bdevperf_pid=86151 
00:20:02.410 05:18:51 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 
00:20:02.410 05:18:51 -- host/timeout.sh@112 -- # waitforlisten 86151 /var/tmp/bdevperf.sock 
00:20:02.410 05:18:51 -- common/autotest_common.sh@829 -- # '[' -z 86151 ']' 
00:20:02.410 05:18:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:20:02.410 05:18:51 -- common/autotest_common.sh@834 -- # local max_retries=100 
00:20:02.410 05:18:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:02.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:02.410 05:18:51 -- common/autotest_common.sh@838 -- # xtrace_disable 
00:20:02.410 05:18:51 -- common/autotest_common.sh@10 -- # set +x 
00:20:02.410 [2024-12-08 05:18:51.893764] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
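While the new bdevperf instance finishes initializing (its DPDK EAL startup continues below), the trace that follows shows how the test drives it: bdevperf is launched idle with -z and is then configured and started over the UNIX-domain RPC socket /var/tmp/bdevperf.sock. A condensed sketch of that driver sequence, using only commands and option values that appear verbatim in this trace (nothing beyond the log is asserted about what the -r/-e flags to bdev_nvme_set_options mean):

  # Paths and parameters below are copied from the trace.
  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bdevperf.sock

  # Start bdevperf idle (-z) on core mask 0x4 and wait for its RPC socket.
  $spdk/build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w randread -t 10 -f &
  bdevperf_pid=$!

  # Configure the NVMe-oF initiator over the RPC socket; --reconnect-delay-sec and
  # --ctrlr-loss-timeout-sec bound how often and for how long reconnects are attempted.
  $spdk/scripts/rpc.py -s $sock bdev_nvme_set_options -r -1 -e 9
  $spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # Kick off the 10-second randread run; results are reported when it completes.
  $spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests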
00:20:02.410 [2024-12-08 05:18:51.894704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86151 ] 00:20:02.410 [2024-12-08 05:18:52.038728] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.410 [2024-12-08 05:18:52.086448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.382 05:18:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:03.382 05:18:52 -- common/autotest_common.sh@862 -- # return 0 00:20:03.382 05:18:52 -- host/timeout.sh@116 -- # dtrace_pid=86173 00:20:03.382 05:18:52 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86151 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:20:03.382 05:18:52 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:20:03.641 05:18:53 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:03.899 NVMe0n1 00:20:03.899 05:18:53 -- host/timeout.sh@124 -- # rpc_pid=86209 00:20:03.899 05:18:53 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:03.899 05:18:53 -- host/timeout.sh@125 -- # sleep 1 00:20:03.899 Running I/O for 10 seconds... 00:20:04.832 05:18:54 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:05.092 [2024-12-08 05:18:54.869579] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.092 [2024-12-08 05:18:54.869642] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.092 [2024-12-08 05:18:54.869654] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.092 [2024-12-08 05:18:54.869663] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.092 [2024-12-08 05:18:54.869686] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.092 [2024-12-08 05:18:54.869697] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.092 [2024-12-08 05:18:54.869706] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.092 [2024-12-08 05:18:54.869714] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.092 [2024-12-08 05:18:54.869723] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.092 [2024-12-08 05:18:54.869731] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869739] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869747] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869755] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869763] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869771] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869779] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869787] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869795] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869803] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869811] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869819] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869827] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869835] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869843] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869851] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869859] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869867] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869875] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869882] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869890] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869898] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869906] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869914] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869923] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869932] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869940] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869948] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869956] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869964] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869972] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869989] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.869997] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870005] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870012] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870020] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870028] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870036] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870044] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870052] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870060] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870068] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870076] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870084] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the 
state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870092] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870101] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870109] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870118] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870127] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870135] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870143] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870151] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870159] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870167] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870175] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870182] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870192] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870200] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870208] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870216] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870224] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870231] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870239] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870247] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870255] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870263] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870271] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870279] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870287] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870295] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.093 [2024-12-08 05:18:54.870303] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870311] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870319] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870327] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870335] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870343] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870351] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870360] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870368] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870376] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870384] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870392] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870400] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870408] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870415] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870423] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870431] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 
05:18:54.870439] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870447] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870455] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870463] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870471] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870489] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870501] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870510] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870518] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870526] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870534] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870542] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870550] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870557] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870565] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870573] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870581] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870589] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870597] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870605] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870612] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870621] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same 
with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870638] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870646] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870655] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870663] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870671] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870691] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870699] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c32f0 is same with the state(5) to be set 00:20:05.094 [2024-12-08 05:18:54.870790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.094 [2024-12-08 05:18:54.870820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.094 [2024-12-08 05:18:54.870843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.094 [2024-12-08 05:18:54.870854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.094 [2024-12-08 05:18:54.870865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.094 [2024-12-08 05:18:54.870875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.094 [2024-12-08 05:18:54.870887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.094 [2024-12-08 05:18:54.870897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.094 [2024-12-08 05:18:54.870908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.094 [2024-12-08 05:18:54.870917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.094 [2024-12-08 05:18:54.870928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:111160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.094 [2024-12-08 05:18:54.870937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.094 [2024-12-08 05:18:54.870948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.094 [2024-12-08 05:18:54.870957] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.094 [2024-12-08 05:18:54.870968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.094 [2024-12-08 05:18:54.870977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.094 [2024-12-08 05:18:54.870988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:35936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.094 [2024-12-08 05:18:54.870997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.094 [2024-12-08 05:18:54.871008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.094 [2024-12-08 05:18:54.871017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.094 [2024-12-08 05:18:54.871028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.094 [2024-12-08 05:18:54.871037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.094 [2024-12-08 05:18:54.871049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.094 [2024-12-08 05:18:54.871058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.094 [2024-12-08 05:18:54.871069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.094 [2024-12-08 05:18:54.871078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.094 [2024-12-08 05:18:54.871089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.094 [2024-12-08 05:18:54.871098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.094 [2024-12-08 05:18:54.871109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:93288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.094 [2024-12-08 05:18:54.871118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:46984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:122104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:102304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:05.095 [2024-12-08 05:18:54.871387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:50312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:57616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:125336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871594] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:91840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:105064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:127296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:76568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:73288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.095 [2024-12-08 05:18:54.871949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.095 [2024-12-08 05:18:54.871958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.871971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:70496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.871981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.871992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:114096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:57 nsid:1 lba:42120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:118896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:110704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:70688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130696 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:102128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:91664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:68360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:43472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:05.096 [2024-12-08 05:18:54.872512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:68712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:28240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.096 [2024-12-08 05:18:54.872688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.096 [2024-12-08 05:18:54.872699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.872710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.872719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.872730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.872739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.872750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:39584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 
05:18:54.872759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.872770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:29656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.872780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.872791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.872800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.872811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.872819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.872830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:52264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.872839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.872850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.872859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.872870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.872879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.872890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.872899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.872909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.872918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.872929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.872942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.872953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.872962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.872973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:41856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.872982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.872995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:48216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.873004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.873015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.873024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.873035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.873044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.873055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.873064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.873076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.873084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.873095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.873104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.873115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.873124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.873134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:29104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.873143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.873154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.873163] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.873174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.873183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.873193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.873202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.873213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.873222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.873233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:93192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.873242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.873253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:50464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.873264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.873275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:114736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.873284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.873295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.873304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.873316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:49408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.873325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.097 [2024-12-08 05:18:54.873336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.097 [2024-12-08 05:18:54.873345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.098 [2024-12-08 05:18:54.873356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:33216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.098 [2024-12-08 05:18:54.873365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.098 [2024-12-08 05:18:54.873376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.098 [2024-12-08 05:18:54.873385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.098 [2024-12-08 05:18:54.873396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:50464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.098 [2024-12-08 05:18:54.873404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.098 [2024-12-08 05:18:54.873415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.098 [2024-12-08 05:18:54.873424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.098 [2024-12-08 05:18:54.873435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.098 [2024-12-08 05:18:54.873443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.098 [2024-12-08 05:18:54.873454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:32720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.098 [2024-12-08 05:18:54.873463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.098 [2024-12-08 05:18:54.873474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.098 [2024-12-08 05:18:54.873483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.098 [2024-12-08 05:18:54.873493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:37448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.098 [2024-12-08 05:18:54.873503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.098 [2024-12-08 05:18:54.873513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.098 [2024-12-08 05:18:54.873524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.098 [2024-12-08 05:18:54.873542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.098 [2024-12-08 05:18:54.873558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.098 [2024-12-08 05:18:54.873570] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944070 is same with the state(5) to be set 00:20:05.098 [2024-12-08 05:18:54.873583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:05.098 [2024-12-08 05:18:54.873591] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:05.098 [2024-12-08 05:18:54.873602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57792 len:8 PRP1 0x0 PRP2 0x0 00:20:05.098 [2024-12-08 05:18:54.873611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.098 [2024-12-08 05:18:54.873654] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1944070 was disconnected and freed. reset controller. 00:20:05.098 [2024-12-08 05:18:54.873777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.098 [2024-12-08 05:18:54.873804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.098 [2024-12-08 05:18:54.873817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.098 [2024-12-08 05:18:54.873826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.098 [2024-12-08 05:18:54.873836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.098 [2024-12-08 05:18:54.873845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.098 [2024-12-08 05:18:54.873855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.098 [2024-12-08 05:18:54.873863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.098 [2024-12-08 05:18:54.873872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911ea0 is same with the state(5) to be set 00:20:05.098 [2024-12-08 05:18:54.874122] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:05.098 [2024-12-08 05:18:54.874146] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1911ea0 (9): Bad file descriptor 00:20:05.098 [2024-12-08 05:18:54.874252] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.098 [2024-12-08 05:18:54.874346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.098 [2024-12-08 05:18:54.874401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.098 [2024-12-08 05:18:54.874420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1911ea0 with addr=10.0.0.2, port=4420 00:20:05.098 [2024-12-08 05:18:54.874431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911ea0 is same with the state(5) to be set 00:20:05.098 [2024-12-08 05:18:54.874451] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1911ea0 (9): Bad file descriptor 00:20:05.098 [2024-12-08 05:18:54.874468] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:05.098 [2024-12-08 05:18:54.874478] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:05.354 [2024-12-08 05:18:54.888758] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:05.354 [2024-12-08 05:18:54.888835] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:05.354 [2024-12-08 05:18:54.888857] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:05.354 05:18:54 -- host/timeout.sh@128 -- # wait 86209 00:20:07.261 [2024-12-08 05:18:56.889043] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.261 [2024-12-08 05:18:56.889152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.261 [2024-12-08 05:18:56.889201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.261 [2024-12-08 05:18:56.889220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1911ea0 with addr=10.0.0.2, port=4420 00:20:07.261 [2024-12-08 05:18:56.889234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911ea0 is same with the state(5) to be set 00:20:07.261 [2024-12-08 05:18:56.889262] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1911ea0 (9): Bad file descriptor 00:20:07.261 [2024-12-08 05:18:56.889281] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:07.261 [2024-12-08 05:18:56.889291] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:07.261 [2024-12-08 05:18:56.889302] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:07.261 [2024-12-08 05:18:56.889331] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.261 [2024-12-08 05:18:56.889342] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:09.155 [2024-12-08 05:18:58.889492] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.155 [2024-12-08 05:18:58.889588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.155 [2024-12-08 05:18:58.889637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.155 [2024-12-08 05:18:58.889654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1911ea0 with addr=10.0.0.2, port=4420 00:20:09.155 [2024-12-08 05:18:58.889668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911ea0 is same with the state(5) to be set 00:20:09.155 [2024-12-08 05:18:58.889707] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1911ea0 (9): Bad file descriptor 00:20:09.155 [2024-12-08 05:18:58.889728] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:09.155 [2024-12-08 05:18:58.889738] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:09.155 [2024-12-08 05:18:58.889749] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:09.155 [2024-12-08 05:18:58.889776] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
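The connect() failures in the records above report errno = 111, which is ECONNREFUSED on Linux: nothing is accepting connections on 10.0.0.2 port 4420 at this point, so each reconnect attempt made by bdev_nvme is refused and retried on the configured delay (the attempts land roughly 2 s apart: 05:18:54, 05:18:56, 05:18:58). The timeout test itself checks the resulting trace for "reconnect delay bdev controller NVMe0" events further down. A minimal sketch for pulling the same counts out of a saved copy of this log ("build.log" is a placeholder name, not a file the harness creates):

    # count reset cycles and refused connect() attempts recorded in the captured log
    grep -Fo 'resetting controller' build.log | wc -l
    grep -Fo 'connect() failed, errno = 111' build.log | wc -l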
00:20:09.155 [2024-12-08 05:18:58.889787] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:11.703 [2024-12-08 05:19:00.889862] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:11.703 [2024-12-08 05:19:00.889935] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:11.703 [2024-12-08 05:19:00.889949] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:11.703 [2024-12-08 05:19:00.889967] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:20:11.703 [2024-12-08 05:19:00.889996] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:12.270 00:20:12.270 Latency(us) 00:20:12.270 [2024-12-08T05:19:02.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.270 [2024-12-08T05:19:02.056Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:12.270 NVMe0n1 : 8.20 2136.88 8.35 15.61 0.00 59368.77 7864.32 7046430.72 00:20:12.270 [2024-12-08T05:19:02.056Z] =================================================================================================================== 00:20:12.270 [2024-12-08T05:19:02.056Z] Total : 2136.88 8.35 15.61 0.00 59368.77 7864.32 7046430.72 00:20:12.270 0 00:20:12.270 05:19:01 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:12.270 Attaching 5 probes... 00:20:12.270 1395.232075: reset bdev controller NVMe0 00:20:12.270 1395.297634: reconnect bdev controller NVMe0 00:20:12.270 3410.012365: reconnect delay bdev controller NVMe0 00:20:12.270 3410.036638: reconnect bdev controller NVMe0 00:20:12.270 5410.486996: reconnect delay bdev controller NVMe0 00:20:12.270 5410.507846: reconnect bdev controller NVMe0 00:20:12.270 7410.936090: reconnect delay bdev controller NVMe0 00:20:12.270 7410.965417: reconnect bdev controller NVMe0 00:20:12.270 05:19:01 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:12.270 05:19:01 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:12.270 05:19:01 -- host/timeout.sh@136 -- # kill 86173 00:20:12.270 05:19:01 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:12.270 05:19:01 -- host/timeout.sh@139 -- # killprocess 86151 00:20:12.270 05:19:01 -- common/autotest_common.sh@936 -- # '[' -z 86151 ']' 00:20:12.270 05:19:01 -- common/autotest_common.sh@940 -- # kill -0 86151 00:20:12.270 05:19:01 -- common/autotest_common.sh@941 -- # uname 00:20:12.270 05:19:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:12.270 05:19:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86151 00:20:12.270 killing process with pid 86151 00:20:12.270 Received shutdown signal, test time was about 8.282735 seconds 00:20:12.270 00:20:12.270 Latency(us) 00:20:12.270 [2024-12-08T05:19:02.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.270 [2024-12-08T05:19:02.056Z] =================================================================================================================== 00:20:12.270 [2024-12-08T05:19:02.056Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.270 05:19:01 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:12.270 05:19:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 
= sudo ']' 00:20:12.270 05:19:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86151' 00:20:12.270 05:19:01 -- common/autotest_common.sh@955 -- # kill 86151 00:20:12.270 05:19:01 -- common/autotest_common.sh@960 -- # wait 86151 00:20:12.529 05:19:02 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:12.789 05:19:02 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:12.789 05:19:02 -- host/timeout.sh@145 -- # nvmftestfini 00:20:12.789 05:19:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:12.789 05:19:02 -- nvmf/common.sh@116 -- # sync 00:20:12.789 05:19:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:12.789 05:19:02 -- nvmf/common.sh@119 -- # set +e 00:20:12.789 05:19:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:12.789 05:19:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:12.789 rmmod nvme_tcp 00:20:12.789 rmmod nvme_fabrics 00:20:12.789 rmmod nvme_keyring 00:20:12.789 05:19:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:12.789 05:19:02 -- nvmf/common.sh@123 -- # set -e 00:20:12.789 05:19:02 -- nvmf/common.sh@124 -- # return 0 00:20:12.789 05:19:02 -- nvmf/common.sh@477 -- # '[' -n 85707 ']' 00:20:12.789 05:19:02 -- nvmf/common.sh@478 -- # killprocess 85707 00:20:12.789 05:19:02 -- common/autotest_common.sh@936 -- # '[' -z 85707 ']' 00:20:12.789 05:19:02 -- common/autotest_common.sh@940 -- # kill -0 85707 00:20:12.789 05:19:02 -- common/autotest_common.sh@941 -- # uname 00:20:12.789 05:19:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:12.789 05:19:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85707 00:20:12.789 killing process with pid 85707 00:20:12.789 05:19:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:12.789 05:19:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:12.789 05:19:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85707' 00:20:12.789 05:19:02 -- common/autotest_common.sh@955 -- # kill 85707 00:20:12.789 05:19:02 -- common/autotest_common.sh@960 -- # wait 85707 00:20:13.048 05:19:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:13.048 05:19:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:13.048 05:19:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:13.048 05:19:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:13.048 05:19:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:13.049 05:19:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.049 05:19:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.049 05:19:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.049 05:19:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:13.049 ************************************ 00:20:13.049 END TEST nvmf_timeout 00:20:13.049 ************************************ 00:20:13.049 00:20:13.049 real 0m47.634s 00:20:13.049 user 2m20.741s 00:20:13.049 sys 0m5.734s 00:20:13.049 05:19:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:13.049 05:19:02 -- common/autotest_common.sh@10 -- # set +x 00:20:13.049 05:19:02 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:20:13.049 05:19:02 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:20:13.049 05:19:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:13.049 05:19:02 -- common/autotest_common.sh@10 -- # set +x 00:20:13.049 05:19:02 -- nvmf/nvmf.sh@129 
-- # trap - SIGINT SIGTERM EXIT 00:20:13.049 ************************************ 00:20:13.049 END TEST nvmf_tcp 00:20:13.049 ************************************ 00:20:13.049 00:20:13.049 real 10m48.574s 00:20:13.049 user 30m5.819s 00:20:13.049 sys 3m27.481s 00:20:13.049 05:19:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:13.049 05:19:02 -- common/autotest_common.sh@10 -- # set +x 00:20:13.049 05:19:02 -- spdk/autotest.sh@283 -- # [[ 1 -eq 0 ]] 00:20:13.049 05:19:02 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:13.049 05:19:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:13.049 05:19:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:13.049 05:19:02 -- common/autotest_common.sh@10 -- # set +x 00:20:13.049 ************************************ 00:20:13.049 START TEST nvmf_dif 00:20:13.049 ************************************ 00:20:13.049 05:19:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:13.308 * Looking for test storage... 00:20:13.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:13.308 05:19:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:13.308 05:19:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:13.308 05:19:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:13.308 05:19:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:13.308 05:19:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:13.308 05:19:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:13.308 05:19:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:13.308 05:19:02 -- scripts/common.sh@335 -- # IFS=.-: 00:20:13.308 05:19:02 -- scripts/common.sh@335 -- # read -ra ver1 00:20:13.308 05:19:02 -- scripts/common.sh@336 -- # IFS=.-: 00:20:13.308 05:19:02 -- scripts/common.sh@336 -- # read -ra ver2 00:20:13.308 05:19:02 -- scripts/common.sh@337 -- # local 'op=<' 00:20:13.308 05:19:02 -- scripts/common.sh@339 -- # ver1_l=2 00:20:13.308 05:19:02 -- scripts/common.sh@340 -- # ver2_l=1 00:20:13.308 05:19:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:13.308 05:19:02 -- scripts/common.sh@343 -- # case "$op" in 00:20:13.308 05:19:02 -- scripts/common.sh@344 -- # : 1 00:20:13.308 05:19:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:13.308 05:19:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:13.308 05:19:02 -- scripts/common.sh@364 -- # decimal 1 00:20:13.308 05:19:02 -- scripts/common.sh@352 -- # local d=1 00:20:13.308 05:19:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:13.308 05:19:02 -- scripts/common.sh@354 -- # echo 1 00:20:13.308 05:19:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:13.308 05:19:02 -- scripts/common.sh@365 -- # decimal 2 00:20:13.308 05:19:02 -- scripts/common.sh@352 -- # local d=2 00:20:13.308 05:19:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:13.308 05:19:02 -- scripts/common.sh@354 -- # echo 2 00:20:13.308 05:19:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:13.308 05:19:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:13.308 05:19:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:13.308 05:19:02 -- scripts/common.sh@367 -- # return 0 00:20:13.308 05:19:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:13.308 05:19:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:13.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.308 --rc genhtml_branch_coverage=1 00:20:13.308 --rc genhtml_function_coverage=1 00:20:13.308 --rc genhtml_legend=1 00:20:13.308 --rc geninfo_all_blocks=1 00:20:13.308 --rc geninfo_unexecuted_blocks=1 00:20:13.308 00:20:13.308 ' 00:20:13.308 05:19:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:13.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.308 --rc genhtml_branch_coverage=1 00:20:13.308 --rc genhtml_function_coverage=1 00:20:13.308 --rc genhtml_legend=1 00:20:13.308 --rc geninfo_all_blocks=1 00:20:13.308 --rc geninfo_unexecuted_blocks=1 00:20:13.308 00:20:13.308 ' 00:20:13.308 05:19:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:13.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.308 --rc genhtml_branch_coverage=1 00:20:13.308 --rc genhtml_function_coverage=1 00:20:13.308 --rc genhtml_legend=1 00:20:13.308 --rc geninfo_all_blocks=1 00:20:13.308 --rc geninfo_unexecuted_blocks=1 00:20:13.308 00:20:13.308 ' 00:20:13.308 05:19:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:13.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.308 --rc genhtml_branch_coverage=1 00:20:13.308 --rc genhtml_function_coverage=1 00:20:13.308 --rc genhtml_legend=1 00:20:13.308 --rc geninfo_all_blocks=1 00:20:13.308 --rc geninfo_unexecuted_blocks=1 00:20:13.308 00:20:13.308 ' 00:20:13.308 05:19:02 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:13.308 05:19:02 -- nvmf/common.sh@7 -- # uname -s 00:20:13.308 05:19:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.308 05:19:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.308 05:19:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.308 05:19:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.308 05:19:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.308 05:19:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.308 05:19:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.308 05:19:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.308 05:19:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.308 05:19:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.308 05:19:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:20:13.308 
05:19:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:20:13.308 05:19:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.308 05:19:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.308 05:19:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:13.308 05:19:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:13.308 05:19:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.308 05:19:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.308 05:19:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.308 05:19:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.308 05:19:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.308 05:19:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.308 05:19:02 -- paths/export.sh@5 -- # export PATH 00:20:13.308 05:19:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.308 05:19:02 -- nvmf/common.sh@46 -- # : 0 00:20:13.308 05:19:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:13.308 05:19:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:13.309 05:19:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:13.309 05:19:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.309 05:19:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.309 05:19:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:13.309 05:19:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:13.309 05:19:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:13.309 05:19:02 -- target/dif.sh@15 -- # NULL_META=16 00:20:13.309 05:19:02 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:13.309 05:19:02 -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:13.309 05:19:02 -- target/dif.sh@15 -- # NULL_DIF=1 00:20:13.309 05:19:02 -- target/dif.sh@135 -- # nvmftestinit 00:20:13.309 05:19:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:13.309 05:19:02 
-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.309 05:19:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:13.309 05:19:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:13.309 05:19:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:13.309 05:19:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.309 05:19:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:13.309 05:19:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.309 05:19:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:13.309 05:19:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:13.309 05:19:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:13.309 05:19:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:13.309 05:19:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:13.309 05:19:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:13.309 05:19:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.309 05:19:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:13.309 05:19:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:13.309 05:19:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:13.309 05:19:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:13.309 05:19:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:13.309 05:19:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:13.309 05:19:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.309 05:19:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:13.309 05:19:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:13.309 05:19:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:13.309 05:19:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:13.309 05:19:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:13.309 05:19:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:13.309 Cannot find device "nvmf_tgt_br" 00:20:13.309 05:19:03 -- nvmf/common.sh@154 -- # true 00:20:13.309 05:19:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:13.309 Cannot find device "nvmf_tgt_br2" 00:20:13.309 05:19:03 -- nvmf/common.sh@155 -- # true 00:20:13.309 05:19:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:13.309 05:19:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:13.309 Cannot find device "nvmf_tgt_br" 00:20:13.309 05:19:03 -- nvmf/common.sh@157 -- # true 00:20:13.309 05:19:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:13.309 Cannot find device "nvmf_tgt_br2" 00:20:13.309 05:19:03 -- nvmf/common.sh@158 -- # true 00:20:13.309 05:19:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:13.568 05:19:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:13.568 05:19:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:13.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:13.568 05:19:03 -- nvmf/common.sh@161 -- # true 00:20:13.568 05:19:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:13.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:13.568 05:19:03 -- nvmf/common.sh@162 -- # true 00:20:13.568 05:19:03 -- nvmf/common.sh@165 -- # ip netns add 
nvmf_tgt_ns_spdk 00:20:13.568 05:19:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:13.568 05:19:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:13.568 05:19:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:13.568 05:19:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:13.568 05:19:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:13.568 05:19:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:13.568 05:19:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:13.568 05:19:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:13.568 05:19:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:13.568 05:19:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:13.568 05:19:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:13.568 05:19:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:13.568 05:19:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:13.568 05:19:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:13.568 05:19:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:13.568 05:19:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:13.568 05:19:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:13.568 05:19:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:13.568 05:19:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:13.568 05:19:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:13.568 05:19:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:13.568 05:19:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:13.568 05:19:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:13.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:13.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:20:13.568 00:20:13.568 --- 10.0.0.2 ping statistics --- 00:20:13.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.568 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:20:13.568 05:19:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:13.568 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:13.568 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:20:13.568 00:20:13.568 --- 10.0.0.3 ping statistics --- 00:20:13.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.568 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:20:13.568 05:19:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:13.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:13.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:20:13.568 00:20:13.568 --- 10.0.0.1 ping statistics --- 00:20:13.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.568 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:20:13.568 05:19:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.568 05:19:03 -- nvmf/common.sh@421 -- # return 0 00:20:13.568 05:19:03 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:20:13.568 05:19:03 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:14.136 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:14.136 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:14.136 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:14.136 05:19:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.136 05:19:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:14.136 05:19:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:14.136 05:19:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.136 05:19:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:14.136 05:19:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:14.136 05:19:03 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:14.136 05:19:03 -- target/dif.sh@137 -- # nvmfappstart 00:20:14.136 05:19:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:14.136 05:19:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:14.136 05:19:03 -- common/autotest_common.sh@10 -- # set +x 00:20:14.136 05:19:03 -- nvmf/common.sh@469 -- # nvmfpid=86656 00:20:14.136 05:19:03 -- nvmf/common.sh@470 -- # waitforlisten 86656 00:20:14.136 05:19:03 -- common/autotest_common.sh@829 -- # '[' -z 86656 ']' 00:20:14.136 05:19:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:14.136 05:19:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.136 05:19:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:14.136 05:19:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.136 05:19:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:14.136 05:19:03 -- common/autotest_common.sh@10 -- # set +x 00:20:14.136 [2024-12-08 05:19:03.782029] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:14.136 [2024-12-08 05:19:03.782145] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.136 [2024-12-08 05:19:03.920792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.396 [2024-12-08 05:19:03.958646] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:14.396 [2024-12-08 05:19:03.958854] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.396 [2024-12-08 05:19:03.958878] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
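With the namespace wired up, the target itself is started inside it, which is why every later RPC listener lives behind 10.0.0.2. A rough standalone equivalent of what the harness does next (the socket-wait loop and its sleep interval are assumptions; the modprobe and the nvmf_tgt command line are taken from the trace):

  # kernel NVMe/TCP module, loaded by the harness for the tcp transport case
  modprobe nvme-tcp

  # run the target inside the namespace: shm id 0, tracepoint group mask 0xFFFF
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!

  # wait until the app listens on the default RPC socket before configuring it
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done

This is roughly what waitforlisten does for the harness; the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line in the trace corresponds to that wait.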
00:20:14.396 [2024-12-08 05:19:03.958889] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.396 [2024-12-08 05:19:03.958923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.396 05:19:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:14.396 05:19:04 -- common/autotest_common.sh@862 -- # return 0 00:20:14.396 05:19:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:14.396 05:19:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:14.396 05:19:04 -- common/autotest_common.sh@10 -- # set +x 00:20:14.396 05:19:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.396 05:19:04 -- target/dif.sh@139 -- # create_transport 00:20:14.396 05:19:04 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:14.396 05:19:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.396 05:19:04 -- common/autotest_common.sh@10 -- # set +x 00:20:14.396 [2024-12-08 05:19:04.074928] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.396 05:19:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.396 05:19:04 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:14.396 05:19:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:14.396 05:19:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:14.396 05:19:04 -- common/autotest_common.sh@10 -- # set +x 00:20:14.396 ************************************ 00:20:14.396 START TEST fio_dif_1_default 00:20:14.396 ************************************ 00:20:14.396 05:19:04 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:20:14.396 05:19:04 -- target/dif.sh@86 -- # create_subsystems 0 00:20:14.396 05:19:04 -- target/dif.sh@28 -- # local sub 00:20:14.396 05:19:04 -- target/dif.sh@30 -- # for sub in "$@" 00:20:14.396 05:19:04 -- target/dif.sh@31 -- # create_subsystem 0 00:20:14.396 05:19:04 -- target/dif.sh@18 -- # local sub_id=0 00:20:14.396 05:19:04 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:14.396 05:19:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.396 05:19:04 -- common/autotest_common.sh@10 -- # set +x 00:20:14.396 bdev_null0 00:20:14.396 05:19:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.396 05:19:04 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:14.396 05:19:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.396 05:19:04 -- common/autotest_common.sh@10 -- # set +x 00:20:14.396 05:19:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.396 05:19:04 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:14.396 05:19:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.396 05:19:04 -- common/autotest_common.sh@10 -- # set +x 00:20:14.396 05:19:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.396 05:19:04 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:14.396 05:19:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.396 05:19:04 -- common/autotest_common.sh@10 -- # set +x 00:20:14.396 [2024-12-08 05:19:04.119084] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.396 05:19:04 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.396 05:19:04 -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:14.396 05:19:04 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:14.396 05:19:04 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:14.396 05:19:04 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:14.396 05:19:04 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:14.396 05:19:04 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:14.396 05:19:04 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:14.396 05:19:04 -- target/dif.sh@82 -- # gen_fio_conf 00:20:14.396 05:19:04 -- nvmf/common.sh@520 -- # config=() 00:20:14.396 05:19:04 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:14.396 05:19:04 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:14.396 05:19:04 -- target/dif.sh@54 -- # local file 00:20:14.396 05:19:04 -- common/autotest_common.sh@1330 -- # shift 00:20:14.396 05:19:04 -- nvmf/common.sh@520 -- # local subsystem config 00:20:14.396 05:19:04 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:14.396 05:19:04 -- target/dif.sh@56 -- # cat 00:20:14.396 05:19:04 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:14.396 05:19:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:14.396 05:19:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:14.396 { 00:20:14.396 "params": { 00:20:14.396 "name": "Nvme$subsystem", 00:20:14.396 "trtype": "$TEST_TRANSPORT", 00:20:14.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.396 "adrfam": "ipv4", 00:20:14.396 "trsvcid": "$NVMF_PORT", 00:20:14.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.396 "hdgst": ${hdgst:-false}, 00:20:14.396 "ddgst": ${ddgst:-false} 00:20:14.396 }, 00:20:14.396 "method": "bdev_nvme_attach_controller" 00:20:14.396 } 00:20:14.396 EOF 00:20:14.396 )") 00:20:14.396 05:19:04 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:14.396 05:19:04 -- nvmf/common.sh@542 -- # cat 00:20:14.396 05:19:04 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:14.396 05:19:04 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:14.396 05:19:04 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:14.396 05:19:04 -- target/dif.sh@72 -- # (( file <= files )) 00:20:14.396 05:19:04 -- nvmf/common.sh@544 -- # jq . 
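The target-side configuration traced above is the whole story for these tests: one TCP transport created with DIF insert/strip enabled, a null bdev that carries per-block metadata, and a subsystem exposing that bdev on the veth address. As a standalone sketch of the equivalent rpc.py calls (the rpc.py path and default socket are assumptions; the method names and arguments are copied from the trace):

  # TCP transport with DIF insert/strip handled by the target
  scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip

  # 64 MB null bdev: 512-byte blocks, 16 bytes of metadata per block, DIF type 1
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

  # subsystem, namespace, and the TCP listener on the namespaced veth address
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The later tests in this run repeat exactly this pattern, only varying the DIF type passed to bdev_null_create and the number of subsystems.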
00:20:14.396 05:19:04 -- nvmf/common.sh@545 -- # IFS=, 00:20:14.396 05:19:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:14.396 "params": { 00:20:14.396 "name": "Nvme0", 00:20:14.396 "trtype": "tcp", 00:20:14.396 "traddr": "10.0.0.2", 00:20:14.396 "adrfam": "ipv4", 00:20:14.396 "trsvcid": "4420", 00:20:14.396 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:14.396 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:14.396 "hdgst": false, 00:20:14.396 "ddgst": false 00:20:14.396 }, 00:20:14.396 "method": "bdev_nvme_attach_controller" 00:20:14.396 }' 00:20:14.396 05:19:04 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:14.396 05:19:04 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:14.396 05:19:04 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:14.396 05:19:04 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:14.396 05:19:04 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:14.396 05:19:04 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:14.655 05:19:04 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:14.655 05:19:04 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:14.655 05:19:04 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:14.655 05:19:04 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:14.655 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:14.655 fio-3.35 00:20:14.655 Starting 1 thread 00:20:14.913 [2024-12-08 05:19:04.638498] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
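Nothing on the initiator side goes through the kernel NVMe driver for this I/O: the SPDK bdev engine is preloaded into fio, the JSON printed above attaches a userspace NVMe/TCP controller named Nvme0, and the resulting namespace is addressed as a bdev (conventionally Nvme0n1). The generated job file is piped in via /dev/fd/61 and never appears in the log, so the following is a hypothetical reconstruction based on the fio header and results below (randread, 4 KiB blocks, iodepth 4, roughly a 10-second run):

  ; hypothetical job file approximating gen_fio_conf's output for this test
  [global]
  ioengine=spdk_bdev
  thread=1
  direct=1
  rw=randread
  bs=4k
  iodepth=4
  time_based=1
  runtime=10

  [filename0]
  filename=Nvme0n1

It would be launched the way the trace shows: LD_PRELOAD pointing at build/fio/spdk_bdev, plus --ioengine=spdk_bdev and --spdk_json_conf pointing at the JSON above.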
00:20:14.913 [2024-12-08 05:19:04.638583] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:27.108 00:20:27.108 filename0: (groupid=0, jobs=1): err= 0: pid=86710: Sun Dec 8 05:19:14 2024 00:20:27.108 read: IOPS=8708, BW=34.0MiB/s (35.7MB/s)(340MiB/10001msec) 00:20:27.108 slat (nsec): min=6903, max=50504, avg=8635.85, stdev=1938.96 00:20:27.108 clat (usec): min=375, max=3312, avg=433.94, stdev=28.67 00:20:27.108 lat (usec): min=382, max=3344, avg=442.58, stdev=29.10 00:20:27.108 clat percentiles (usec): 00:20:27.108 | 1.00th=[ 404], 5.00th=[ 408], 10.00th=[ 412], 20.00th=[ 420], 00:20:27.108 | 30.00th=[ 424], 40.00th=[ 429], 50.00th=[ 433], 60.00th=[ 437], 00:20:27.108 | 70.00th=[ 441], 80.00th=[ 445], 90.00th=[ 453], 95.00th=[ 461], 00:20:27.108 | 99.00th=[ 490], 99.50th=[ 545], 99.90th=[ 578], 99.95th=[ 594], 00:20:27.108 | 99.99th=[ 1352] 00:20:27.108 bw ( KiB/s): min=33408, max=35072, per=100.00%, avg=34863.16, stdev=378.98, samples=19 00:20:27.108 iops : min= 8352, max= 8768, avg=8715.79, stdev=94.75, samples=19 00:20:27.108 lat (usec) : 500=99.13%, 750=0.86% 00:20:27.108 lat (msec) : 2=0.01%, 4=0.01% 00:20:27.108 cpu : usr=86.27%, sys=11.91%, ctx=26, majf=0, minf=0 00:20:27.108 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:27.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.108 issued rwts: total=87092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.108 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:27.108 00:20:27.108 Run status group 0 (all jobs): 00:20:27.108 READ: bw=34.0MiB/s (35.7MB/s), 34.0MiB/s-34.0MiB/s (35.7MB/s-35.7MB/s), io=340MiB (357MB), run=10001-10001msec 00:20:27.108 05:19:14 -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:27.108 05:19:14 -- target/dif.sh@43 -- # local sub 00:20:27.108 05:19:14 -- target/dif.sh@45 -- # for sub in "$@" 00:20:27.108 05:19:14 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:27.108 05:19:14 -- target/dif.sh@36 -- # local sub_id=0 00:20:27.108 05:19:14 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:27.108 05:19:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.108 05:19:14 -- common/autotest_common.sh@10 -- # set +x 00:20:27.108 05:19:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.108 05:19:14 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:27.108 05:19:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.108 05:19:14 -- common/autotest_common.sh@10 -- # set +x 00:20:27.108 05:19:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.108 00:20:27.108 real 0m10.818s 00:20:27.108 user 0m9.118s 00:20:27.108 sys 0m1.416s 00:20:27.108 05:19:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:27.108 ************************************ 00:20:27.108 END TEST fio_dif_1_default 00:20:27.108 ************************************ 00:20:27.108 05:19:14 -- common/autotest_common.sh@10 -- # set +x 00:20:27.108 05:19:14 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:27.108 05:19:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:27.108 05:19:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:27.108 05:19:14 -- common/autotest_common.sh@10 -- # set +x 00:20:27.108 ************************************ 00:20:27.108 START TEST 
fio_dif_1_multi_subsystems 00:20:27.108 ************************************ 00:20:27.108 05:19:14 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:20:27.108 05:19:14 -- target/dif.sh@92 -- # local files=1 00:20:27.108 05:19:14 -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:27.108 05:19:14 -- target/dif.sh@28 -- # local sub 00:20:27.108 05:19:14 -- target/dif.sh@30 -- # for sub in "$@" 00:20:27.108 05:19:14 -- target/dif.sh@31 -- # create_subsystem 0 00:20:27.108 05:19:14 -- target/dif.sh@18 -- # local sub_id=0 00:20:27.108 05:19:14 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:27.108 05:19:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.108 05:19:14 -- common/autotest_common.sh@10 -- # set +x 00:20:27.108 bdev_null0 00:20:27.108 05:19:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.108 05:19:14 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:27.108 05:19:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.108 05:19:14 -- common/autotest_common.sh@10 -- # set +x 00:20:27.108 05:19:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.108 05:19:14 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:27.108 05:19:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.108 05:19:14 -- common/autotest_common.sh@10 -- # set +x 00:20:27.108 05:19:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.108 05:19:14 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:27.108 05:19:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.108 05:19:14 -- common/autotest_common.sh@10 -- # set +x 00:20:27.108 [2024-12-08 05:19:14.986037] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.108 05:19:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.108 05:19:14 -- target/dif.sh@30 -- # for sub in "$@" 00:20:27.108 05:19:14 -- target/dif.sh@31 -- # create_subsystem 1 00:20:27.108 05:19:14 -- target/dif.sh@18 -- # local sub_id=1 00:20:27.108 05:19:14 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:27.108 05:19:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.108 05:19:14 -- common/autotest_common.sh@10 -- # set +x 00:20:27.108 bdev_null1 00:20:27.108 05:19:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.108 05:19:14 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:27.108 05:19:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.108 05:19:14 -- common/autotest_common.sh@10 -- # set +x 00:20:27.108 05:19:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.108 05:19:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:27.108 05:19:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.109 05:19:15 -- common/autotest_common.sh@10 -- # set +x 00:20:27.109 05:19:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.109 05:19:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:27.109 05:19:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.109 05:19:15 -- 
common/autotest_common.sh@10 -- # set +x 00:20:27.109 05:19:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.109 05:19:15 -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:27.109 05:19:15 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:27.109 05:19:15 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:27.109 05:19:15 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:27.109 05:19:15 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:27.109 05:19:15 -- target/dif.sh@82 -- # gen_fio_conf 00:20:27.109 05:19:15 -- nvmf/common.sh@520 -- # config=() 00:20:27.109 05:19:15 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:27.109 05:19:15 -- target/dif.sh@54 -- # local file 00:20:27.109 05:19:15 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:27.109 05:19:15 -- nvmf/common.sh@520 -- # local subsystem config 00:20:27.109 05:19:15 -- target/dif.sh@56 -- # cat 00:20:27.109 05:19:15 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:27.109 05:19:15 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:27.109 05:19:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:27.109 05:19:15 -- common/autotest_common.sh@1330 -- # shift 00:20:27.109 05:19:15 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:27.109 05:19:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:27.109 { 00:20:27.109 "params": { 00:20:27.109 "name": "Nvme$subsystem", 00:20:27.109 "trtype": "$TEST_TRANSPORT", 00:20:27.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.109 "adrfam": "ipv4", 00:20:27.109 "trsvcid": "$NVMF_PORT", 00:20:27.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.109 "hdgst": ${hdgst:-false}, 00:20:27.109 "ddgst": ${ddgst:-false} 00:20:27.109 }, 00:20:27.109 "method": "bdev_nvme_attach_controller" 00:20:27.109 } 00:20:27.109 EOF 00:20:27.109 )") 00:20:27.109 05:19:15 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:27.109 05:19:15 -- nvmf/common.sh@542 -- # cat 00:20:27.109 05:19:15 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:27.109 05:19:15 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:27.109 05:19:15 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:27.109 05:19:15 -- target/dif.sh@72 -- # (( file <= files )) 00:20:27.109 05:19:15 -- target/dif.sh@73 -- # cat 00:20:27.109 05:19:15 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:27.109 05:19:15 -- target/dif.sh@72 -- # (( file++ )) 00:20:27.109 05:19:15 -- target/dif.sh@72 -- # (( file <= files )) 00:20:27.109 05:19:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:27.109 05:19:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:27.109 { 00:20:27.109 "params": { 00:20:27.109 "name": "Nvme$subsystem", 00:20:27.109 "trtype": "$TEST_TRANSPORT", 00:20:27.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.109 "adrfam": "ipv4", 00:20:27.109 "trsvcid": "$NVMF_PORT", 00:20:27.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.109 "hdgst": ${hdgst:-false}, 00:20:27.109 "ddgst": ${ddgst:-false} 00:20:27.109 }, 00:20:27.109 "method": "bdev_nvme_attach_controller" 00:20:27.109 } 
00:20:27.109 EOF 00:20:27.109 )") 00:20:27.109 05:19:15 -- nvmf/common.sh@542 -- # cat 00:20:27.109 05:19:15 -- nvmf/common.sh@544 -- # jq . 00:20:27.109 05:19:15 -- nvmf/common.sh@545 -- # IFS=, 00:20:27.109 05:19:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:27.109 "params": { 00:20:27.109 "name": "Nvme0", 00:20:27.109 "trtype": "tcp", 00:20:27.109 "traddr": "10.0.0.2", 00:20:27.109 "adrfam": "ipv4", 00:20:27.109 "trsvcid": "4420", 00:20:27.109 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:27.109 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:27.109 "hdgst": false, 00:20:27.109 "ddgst": false 00:20:27.109 }, 00:20:27.109 "method": "bdev_nvme_attach_controller" 00:20:27.109 },{ 00:20:27.109 "params": { 00:20:27.109 "name": "Nvme1", 00:20:27.109 "trtype": "tcp", 00:20:27.109 "traddr": "10.0.0.2", 00:20:27.109 "adrfam": "ipv4", 00:20:27.109 "trsvcid": "4420", 00:20:27.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:27.109 "hdgst": false, 00:20:27.109 "ddgst": false 00:20:27.109 }, 00:20:27.109 "method": "bdev_nvme_attach_controller" 00:20:27.109 }' 00:20:27.109 05:19:15 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:27.109 05:19:15 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:27.109 05:19:15 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:27.109 05:19:15 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:27.109 05:19:15 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:27.109 05:19:15 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:27.109 05:19:15 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:27.109 05:19:15 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:27.109 05:19:15 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:27.109 05:19:15 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:27.109 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:27.109 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:27.109 fio-3.35 00:20:27.109 Starting 2 threads 00:20:27.109 [2024-12-08 05:19:15.609486] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
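The multi-subsystem variant differs from the single-bdev run only in fan-out: the JSON above attaches two controllers, both at the same 10.0.0.2:4420 listener but targeting different subsystem NQNs (cnode0 and cnode1), and each fio job then reads from its own namespace. In job-file terms that is simply two sections (a sketch; the NvmeXn1 bdev names follow the usual bdev_nvme naming convention and are an assumption, since the generated job file is not shown):

  [filename0]
  filename=Nvme0n1

  [filename1]
  filename=Nvme1n1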
00:20:27.109 [2024-12-08 05:19:15.609558] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:37.099 00:20:37.099 filename0: (groupid=0, jobs=1): err= 0: pid=86875: Sun Dec 8 05:19:25 2024 00:20:37.099 read: IOPS=4781, BW=18.7MiB/s (19.6MB/s)(187MiB/10001msec) 00:20:37.099 slat (nsec): min=7115, max=56245, avg=14101.41, stdev=3309.41 00:20:37.099 clat (usec): min=431, max=2804, avg=797.58, stdev=38.50 00:20:37.099 lat (usec): min=439, max=2829, avg=811.68, stdev=39.03 00:20:37.099 clat percentiles (usec): 00:20:37.099 | 1.00th=[ 742], 5.00th=[ 758], 10.00th=[ 766], 20.00th=[ 775], 00:20:37.099 | 30.00th=[ 783], 40.00th=[ 791], 50.00th=[ 791], 60.00th=[ 799], 00:20:37.099 | 70.00th=[ 807], 80.00th=[ 816], 90.00th=[ 832], 95.00th=[ 840], 00:20:37.099 | 99.00th=[ 922], 99.50th=[ 1020], 99.90th=[ 1090], 99.95th=[ 1106], 00:20:37.099 | 99.99th=[ 1221] 00:20:37.099 bw ( KiB/s): min=18496, max=19296, per=50.02%, avg=19131.89, stdev=208.88, samples=19 00:20:37.099 iops : min= 4624, max= 4824, avg=4782.95, stdev=52.24, samples=19 00:20:37.099 lat (usec) : 500=0.02%, 750=2.06%, 1000=97.27% 00:20:37.099 lat (msec) : 2=0.65%, 4=0.01% 00:20:37.099 cpu : usr=88.79%, sys=9.87%, ctx=8, majf=0, minf=0 00:20:37.099 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:37.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.099 issued rwts: total=47816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.099 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:37.099 filename1: (groupid=0, jobs=1): err= 0: pid=86876: Sun Dec 8 05:19:25 2024 00:20:37.099 read: IOPS=4780, BW=18.7MiB/s (19.6MB/s)(187MiB/10001msec) 00:20:37.099 slat (nsec): min=5429, max=65873, avg=13826.64, stdev=3219.95 00:20:37.099 clat (usec): min=671, max=3855, avg=799.59, stdev=48.72 00:20:37.099 lat (usec): min=679, max=3879, avg=813.42, stdev=49.14 00:20:37.099 clat percentiles (usec): 00:20:37.099 | 1.00th=[ 709], 5.00th=[ 734], 10.00th=[ 758], 20.00th=[ 775], 00:20:37.099 | 30.00th=[ 783], 40.00th=[ 791], 50.00th=[ 799], 60.00th=[ 807], 00:20:37.099 | 70.00th=[ 816], 80.00th=[ 824], 90.00th=[ 840], 95.00th=[ 848], 00:20:37.099 | 99.00th=[ 922], 99.50th=[ 1037], 99.90th=[ 1106], 99.95th=[ 1123], 00:20:37.099 | 99.99th=[ 1205] 00:20:37.099 bw ( KiB/s): min=18496, max=19296, per=50.02%, avg=19129.95, stdev=212.09, samples=19 00:20:37.099 iops : min= 4624, max= 4824, avg=4782.47, stdev=53.02, samples=19 00:20:37.099 lat (usec) : 750=7.97%, 1000=91.28% 00:20:37.099 lat (msec) : 2=0.74%, 4=0.01% 00:20:37.099 cpu : usr=88.52%, sys=10.08%, ctx=6, majf=0, minf=0 00:20:37.099 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:37.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.099 issued rwts: total=47808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.099 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:37.099 00:20:37.099 Run status group 0 (all jobs): 00:20:37.099 READ: bw=37.3MiB/s (39.2MB/s), 18.7MiB/s-18.7MiB/s (19.6MB/s-19.6MB/s), io=374MiB (392MB), run=10001-10001msec 00:20:37.099 05:19:25 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:37.099 05:19:25 -- target/dif.sh@43 -- # local sub 00:20:37.099 05:19:25 -- target/dif.sh@45 -- # for sub in "$@" 00:20:37.099 05:19:25 -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:20:37.099 05:19:25 -- target/dif.sh@36 -- # local sub_id=0 00:20:37.099 05:19:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:37.099 05:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.099 05:19:25 -- common/autotest_common.sh@10 -- # set +x 00:20:37.099 05:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.099 05:19:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:37.099 05:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.099 05:19:25 -- common/autotest_common.sh@10 -- # set +x 00:20:37.099 05:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.099 05:19:25 -- target/dif.sh@45 -- # for sub in "$@" 00:20:37.099 05:19:25 -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:37.099 05:19:25 -- target/dif.sh@36 -- # local sub_id=1 00:20:37.099 05:19:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:37.099 05:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.099 05:19:25 -- common/autotest_common.sh@10 -- # set +x 00:20:37.099 05:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.099 05:19:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:37.099 05:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.099 05:19:25 -- common/autotest_common.sh@10 -- # set +x 00:20:37.099 05:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.099 00:20:37.099 real 0m10.969s 00:20:37.099 user 0m18.388s 00:20:37.099 sys 0m2.220s 00:20:37.099 05:19:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:37.099 05:19:25 -- common/autotest_common.sh@10 -- # set +x 00:20:37.099 ************************************ 00:20:37.099 END TEST fio_dif_1_multi_subsystems 00:20:37.099 ************************************ 00:20:37.099 05:19:25 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:37.099 05:19:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:37.099 05:19:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:37.099 05:19:25 -- common/autotest_common.sh@10 -- # set +x 00:20:37.099 ************************************ 00:20:37.100 START TEST fio_dif_rand_params 00:20:37.100 ************************************ 00:20:37.100 05:19:25 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:20:37.100 05:19:25 -- target/dif.sh@100 -- # local NULL_DIF 00:20:37.100 05:19:25 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:37.100 05:19:25 -- target/dif.sh@103 -- # NULL_DIF=3 00:20:37.100 05:19:25 -- target/dif.sh@103 -- # bs=128k 00:20:37.100 05:19:25 -- target/dif.sh@103 -- # numjobs=3 00:20:37.100 05:19:25 -- target/dif.sh@103 -- # iodepth=3 00:20:37.100 05:19:25 -- target/dif.sh@103 -- # runtime=5 00:20:37.100 05:19:25 -- target/dif.sh@105 -- # create_subsystems 0 00:20:37.100 05:19:25 -- target/dif.sh@28 -- # local sub 00:20:37.100 05:19:25 -- target/dif.sh@30 -- # for sub in "$@" 00:20:37.100 05:19:25 -- target/dif.sh@31 -- # create_subsystem 0 00:20:37.100 05:19:25 -- target/dif.sh@18 -- # local sub_id=0 00:20:37.100 05:19:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:37.100 05:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.100 05:19:25 -- common/autotest_common.sh@10 -- # set +x 00:20:37.100 bdev_null0 00:20:37.100 05:19:25 -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:37.100 05:19:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:37.100 05:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.100 05:19:25 -- common/autotest_common.sh@10 -- # set +x 00:20:37.100 05:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.100 05:19:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:37.100 05:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.100 05:19:25 -- common/autotest_common.sh@10 -- # set +x 00:20:37.100 05:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.100 05:19:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:37.100 05:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.100 05:19:25 -- common/autotest_common.sh@10 -- # set +x 00:20:37.100 [2024-12-08 05:19:26.003515] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.100 05:19:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.100 05:19:26 -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:37.100 05:19:26 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:37.100 05:19:26 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:37.100 05:19:26 -- nvmf/common.sh@520 -- # config=() 00:20:37.100 05:19:26 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:37.100 05:19:26 -- nvmf/common.sh@520 -- # local subsystem config 00:20:37.100 05:19:26 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:37.100 05:19:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:37.100 05:19:26 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:37.100 05:19:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:37.100 { 00:20:37.100 "params": { 00:20:37.100 "name": "Nvme$subsystem", 00:20:37.100 "trtype": "$TEST_TRANSPORT", 00:20:37.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.100 "adrfam": "ipv4", 00:20:37.100 "trsvcid": "$NVMF_PORT", 00:20:37.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.100 "hdgst": ${hdgst:-false}, 00:20:37.100 "ddgst": ${ddgst:-false} 00:20:37.100 }, 00:20:37.100 "method": "bdev_nvme_attach_controller" 00:20:37.100 } 00:20:37.100 EOF 00:20:37.100 )") 00:20:37.100 05:19:26 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:37.100 05:19:26 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:37.100 05:19:26 -- target/dif.sh@82 -- # gen_fio_conf 00:20:37.100 05:19:26 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:37.100 05:19:26 -- common/autotest_common.sh@1330 -- # shift 00:20:37.100 05:19:26 -- target/dif.sh@54 -- # local file 00:20:37.100 05:19:26 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:37.100 05:19:26 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:37.100 05:19:26 -- target/dif.sh@56 -- # cat 00:20:37.100 05:19:26 -- nvmf/common.sh@542 -- # cat 00:20:37.100 05:19:26 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:37.100 05:19:26 -- common/autotest_common.sh@1334 -- # 
grep libasan 00:20:37.100 05:19:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:37.100 05:19:26 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:37.100 05:19:26 -- target/dif.sh@72 -- # (( file <= files )) 00:20:37.100 05:19:26 -- nvmf/common.sh@544 -- # jq . 00:20:37.100 05:19:26 -- nvmf/common.sh@545 -- # IFS=, 00:20:37.100 05:19:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:37.100 "params": { 00:20:37.100 "name": "Nvme0", 00:20:37.100 "trtype": "tcp", 00:20:37.100 "traddr": "10.0.0.2", 00:20:37.100 "adrfam": "ipv4", 00:20:37.100 "trsvcid": "4420", 00:20:37.100 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:37.100 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:37.100 "hdgst": false, 00:20:37.100 "ddgst": false 00:20:37.100 }, 00:20:37.100 "method": "bdev_nvme_attach_controller" 00:20:37.100 }' 00:20:37.100 05:19:26 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:37.100 05:19:26 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:37.100 05:19:26 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:37.100 05:19:26 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:37.100 05:19:26 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:37.100 05:19:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:37.100 05:19:26 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:37.100 05:19:26 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:37.100 05:19:26 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:37.100 05:19:26 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:37.100 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:37.100 ... 00:20:37.100 fio-3.35 00:20:37.100 Starting 3 threads 00:20:37.100 [2024-12-08 05:19:26.536081] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
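fio_dif_rand_params starts by turning the knobs the earlier tests left at their defaults: the null bdev is recreated with DIF type 3, and the I/O pattern becomes 128 KiB requests at queue depth 3 from three jobs for five seconds. A hypothetical job-file sketch with those values (taken from the NULL_DIF/bs/numjobs/iodepth/runtime assignments at the top of the test; everything else is assumed):

  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  time_based=1
  runtime=5

  [filename0]
  filename=Nvme0n1

The three per-thread result blocks below (pids 87034-87036) all report against the same filename0 section, which is what numjobs=3 looks like in fio output.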
00:20:37.100 [2024-12-08 05:19:26.536146] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:42.367 00:20:42.367 filename0: (groupid=0, jobs=1): err= 0: pid=87034: Sun Dec 8 05:19:31 2024 00:20:42.367 read: IOPS=254, BW=31.8MiB/s (33.4MB/s)(159MiB/5009msec) 00:20:42.367 slat (nsec): min=8010, max=43605, avg=15237.07, stdev=3869.23 00:20:42.367 clat (usec): min=10125, max=17556, avg=11750.85, stdev=847.61 00:20:42.367 lat (usec): min=10139, max=17573, avg=11766.08, stdev=847.57 00:20:42.367 clat percentiles (usec): 00:20:42.367 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:20:42.367 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:20:42.367 | 70.00th=[11600], 80.00th=[11600], 90.00th=[11863], 95.00th=[14222], 00:20:42.367 | 99.00th=[15795], 99.50th=[16581], 99.90th=[17433], 99.95th=[17433], 00:20:42.367 | 99.99th=[17433] 00:20:42.367 bw ( KiB/s): min=27648, max=33792, per=33.33%, avg=32563.20, stdev=1743.81, samples=10 00:20:42.367 iops : min= 216, max= 264, avg=254.40, stdev=13.62, samples=10 00:20:42.367 lat (msec) : 20=100.00% 00:20:42.367 cpu : usr=91.99%, sys=7.41%, ctx=8, majf=0, minf=0 00:20:42.367 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:42.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.367 issued rwts: total=1275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.367 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:42.367 filename0: (groupid=0, jobs=1): err= 0: pid=87035: Sun Dec 8 05:19:31 2024 00:20:42.367 read: IOPS=254, BW=31.8MiB/s (33.3MB/s)(159MiB/5011msec) 00:20:42.367 slat (nsec): min=7728, max=34209, avg=10772.39, stdev=4148.88 00:20:42.367 clat (usec): min=11396, max=17576, avg=11763.20, stdev=834.08 00:20:42.367 lat (usec): min=11404, max=17593, avg=11773.98, stdev=834.22 00:20:42.367 clat percentiles (usec): 00:20:42.367 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:20:42.367 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11600], 00:20:42.367 | 70.00th=[11600], 80.00th=[11600], 90.00th=[11863], 95.00th=[14222], 00:20:42.368 | 99.00th=[15795], 99.50th=[17171], 99.90th=[17695], 99.95th=[17695], 00:20:42.368 | 99.99th=[17695] 00:20:42.368 bw ( KiB/s): min=27648, max=33792, per=33.33%, avg=32563.20, stdev=1743.81, samples=10 00:20:42.368 iops : min= 216, max= 264, avg=254.40, stdev=13.62, samples=10 00:20:42.368 lat (msec) : 20=100.00% 00:20:42.368 cpu : usr=91.32%, sys=8.10%, ctx=9, majf=0, minf=0 00:20:42.368 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:42.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.368 issued rwts: total=1275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.368 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:42.368 filename0: (groupid=0, jobs=1): err= 0: pid=87036: Sun Dec 8 05:19:31 2024 00:20:42.368 read: IOPS=254, BW=31.8MiB/s (33.4MB/s)(159MiB/5010msec) 00:20:42.368 slat (nsec): min=7962, max=42652, avg=14737.23, stdev=3753.98 00:20:42.368 clat (usec): min=10122, max=17582, avg=11753.53, stdev=836.84 00:20:42.368 lat (usec): min=10136, max=17595, avg=11768.27, stdev=836.57 00:20:42.368 clat percentiles (usec): 00:20:42.368 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 
20.00th=[11469], 00:20:42.368 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11600], 00:20:42.368 | 70.00th=[11600], 80.00th=[11600], 90.00th=[11863], 95.00th=[14222], 00:20:42.368 | 99.00th=[15795], 99.50th=[16581], 99.90th=[17695], 99.95th=[17695], 00:20:42.368 | 99.99th=[17695] 00:20:42.368 bw ( KiB/s): min=27648, max=33792, per=33.33%, avg=32563.20, stdev=1743.81, samples=10 00:20:42.368 iops : min= 216, max= 264, avg=254.40, stdev=13.62, samples=10 00:20:42.368 lat (msec) : 20=100.00% 00:20:42.368 cpu : usr=92.27%, sys=7.11%, ctx=5, majf=0, minf=0 00:20:42.368 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:42.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.368 issued rwts: total=1275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.368 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:42.368 00:20:42.368 Run status group 0 (all jobs): 00:20:42.368 READ: bw=95.4MiB/s (100MB/s), 31.8MiB/s-31.8MiB/s (33.3MB/s-33.4MB/s), io=478MiB (501MB), run=5009-5011msec 00:20:42.368 05:19:31 -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:42.368 05:19:31 -- target/dif.sh@43 -- # local sub 00:20:42.368 05:19:31 -- target/dif.sh@45 -- # for sub in "$@" 00:20:42.368 05:19:31 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:42.368 05:19:31 -- target/dif.sh@36 -- # local sub_id=0 00:20:42.368 05:19:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:42.368 05:19:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.368 05:19:31 -- common/autotest_common.sh@10 -- # set +x 00:20:42.368 05:19:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.368 05:19:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:42.368 05:19:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.368 05:19:31 -- common/autotest_common.sh@10 -- # set +x 00:20:42.368 05:19:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.368 05:19:31 -- target/dif.sh@109 -- # NULL_DIF=2 00:20:42.368 05:19:31 -- target/dif.sh@109 -- # bs=4k 00:20:42.368 05:19:31 -- target/dif.sh@109 -- # numjobs=8 00:20:42.368 05:19:31 -- target/dif.sh@109 -- # iodepth=16 00:20:42.368 05:19:31 -- target/dif.sh@109 -- # runtime= 00:20:42.368 05:19:31 -- target/dif.sh@109 -- # files=2 00:20:42.368 05:19:31 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:42.368 05:19:31 -- target/dif.sh@28 -- # local sub 00:20:42.368 05:19:31 -- target/dif.sh@30 -- # for sub in "$@" 00:20:42.368 05:19:31 -- target/dif.sh@31 -- # create_subsystem 0 00:20:42.368 05:19:31 -- target/dif.sh@18 -- # local sub_id=0 00:20:42.368 05:19:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:42.368 05:19:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.368 05:19:31 -- common/autotest_common.sh@10 -- # set +x 00:20:42.368 bdev_null0 00:20:42.368 05:19:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.368 05:19:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:42.368 05:19:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.368 05:19:31 -- common/autotest_common.sh@10 -- # set +x 00:20:42.368 05:19:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.368 05:19:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:42.368 05:19:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.368 05:19:31 -- common/autotest_common.sh@10 -- # set +x 00:20:42.368 05:19:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.368 05:19:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:42.368 05:19:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.368 05:19:31 -- common/autotest_common.sh@10 -- # set +x 00:20:42.368 [2024-12-08 05:19:31.841584] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.368 05:19:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.368 05:19:31 -- target/dif.sh@30 -- # for sub in "$@" 00:20:42.368 05:19:31 -- target/dif.sh@31 -- # create_subsystem 1 00:20:42.368 05:19:31 -- target/dif.sh@18 -- # local sub_id=1 00:20:42.368 05:19:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:42.368 05:19:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.368 05:19:31 -- common/autotest_common.sh@10 -- # set +x 00:20:42.368 bdev_null1 00:20:42.368 05:19:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.368 05:19:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:42.368 05:19:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.368 05:19:31 -- common/autotest_common.sh@10 -- # set +x 00:20:42.368 05:19:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.368 05:19:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:42.368 05:19:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.368 05:19:31 -- common/autotest_common.sh@10 -- # set +x 00:20:42.368 05:19:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.368 05:19:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:42.368 05:19:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.368 05:19:31 -- common/autotest_common.sh@10 -- # set +x 00:20:42.368 05:19:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.368 05:19:31 -- target/dif.sh@30 -- # for sub in "$@" 00:20:42.368 05:19:31 -- target/dif.sh@31 -- # create_subsystem 2 00:20:42.368 05:19:31 -- target/dif.sh@18 -- # local sub_id=2 00:20:42.368 05:19:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:42.368 05:19:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.368 05:19:31 -- common/autotest_common.sh@10 -- # set +x 00:20:42.368 bdev_null2 00:20:42.368 05:19:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.368 05:19:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:42.368 05:19:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.368 05:19:31 -- common/autotest_common.sh@10 -- # set +x 00:20:42.368 05:19:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.368 05:19:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:42.368 05:19:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.368 05:19:31 -- common/autotest_common.sh@10 -- # set +x 00:20:42.368 05:19:31 -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:42.368 05:19:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:42.368 05:19:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.368 05:19:31 -- common/autotest_common.sh@10 -- # set +x 00:20:42.368 05:19:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.368 05:19:31 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:42.368 05:19:31 -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:42.368 05:19:31 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:42.368 05:19:31 -- nvmf/common.sh@520 -- # config=() 00:20:42.368 05:19:31 -- nvmf/common.sh@520 -- # local subsystem config 00:20:42.368 05:19:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:42.368 05:19:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:42.368 { 00:20:42.368 "params": { 00:20:42.368 "name": "Nvme$subsystem", 00:20:42.368 "trtype": "$TEST_TRANSPORT", 00:20:42.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.368 "adrfam": "ipv4", 00:20:42.368 "trsvcid": "$NVMF_PORT", 00:20:42.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.368 "hdgst": ${hdgst:-false}, 00:20:42.368 "ddgst": ${ddgst:-false} 00:20:42.368 }, 00:20:42.369 "method": "bdev_nvme_attach_controller" 00:20:42.369 } 00:20:42.369 EOF 00:20:42.369 )") 00:20:42.369 05:19:31 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:42.369 05:19:31 -- target/dif.sh@82 -- # gen_fio_conf 00:20:42.369 05:19:31 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:42.369 05:19:31 -- target/dif.sh@54 -- # local file 00:20:42.369 05:19:31 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:42.369 05:19:31 -- target/dif.sh@56 -- # cat 00:20:42.369 05:19:31 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:42.369 05:19:31 -- nvmf/common.sh@542 -- # cat 00:20:42.369 05:19:31 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:42.369 05:19:31 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:42.369 05:19:31 -- common/autotest_common.sh@1330 -- # shift 00:20:42.369 05:19:31 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:42.369 05:19:31 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:42.369 05:19:31 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:42.369 05:19:31 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:42.369 05:19:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:42.369 05:19:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:42.369 05:19:31 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:42.369 05:19:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:42.369 { 00:20:42.369 "params": { 00:20:42.369 "name": "Nvme$subsystem", 00:20:42.369 "trtype": "$TEST_TRANSPORT", 00:20:42.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.369 "adrfam": "ipv4", 00:20:42.369 "trsvcid": "$NVMF_PORT", 00:20:42.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.369 "hdgst": ${hdgst:-false}, 00:20:42.369 "ddgst": ${ddgst:-false} 00:20:42.369 }, 00:20:42.369 "method": "bdev_nvme_attach_controller" 
00:20:42.369 } 00:20:42.369 EOF 00:20:42.369 )") 00:20:42.369 05:19:31 -- target/dif.sh@72 -- # (( file <= files )) 00:20:42.369 05:19:31 -- target/dif.sh@73 -- # cat 00:20:42.369 05:19:31 -- nvmf/common.sh@542 -- # cat 00:20:42.369 05:19:31 -- target/dif.sh@72 -- # (( file++ )) 00:20:42.369 05:19:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:42.369 05:19:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:42.369 { 00:20:42.369 "params": { 00:20:42.369 "name": "Nvme$subsystem", 00:20:42.369 "trtype": "$TEST_TRANSPORT", 00:20:42.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.369 "adrfam": "ipv4", 00:20:42.369 "trsvcid": "$NVMF_PORT", 00:20:42.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.369 "hdgst": ${hdgst:-false}, 00:20:42.369 "ddgst": ${ddgst:-false} 00:20:42.369 }, 00:20:42.369 "method": "bdev_nvme_attach_controller" 00:20:42.369 } 00:20:42.369 EOF 00:20:42.369 )") 00:20:42.369 05:19:31 -- target/dif.sh@72 -- # (( file <= files )) 00:20:42.369 05:19:31 -- target/dif.sh@73 -- # cat 00:20:42.369 05:19:31 -- nvmf/common.sh@542 -- # cat 00:20:42.369 05:19:31 -- target/dif.sh@72 -- # (( file++ )) 00:20:42.369 05:19:31 -- target/dif.sh@72 -- # (( file <= files )) 00:20:42.369 05:19:31 -- nvmf/common.sh@544 -- # jq . 00:20:42.369 05:19:31 -- nvmf/common.sh@545 -- # IFS=, 00:20:42.369 05:19:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:42.369 "params": { 00:20:42.369 "name": "Nvme0", 00:20:42.369 "trtype": "tcp", 00:20:42.369 "traddr": "10.0.0.2", 00:20:42.369 "adrfam": "ipv4", 00:20:42.369 "trsvcid": "4420", 00:20:42.369 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:42.369 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:42.369 "hdgst": false, 00:20:42.369 "ddgst": false 00:20:42.369 }, 00:20:42.369 "method": "bdev_nvme_attach_controller" 00:20:42.369 },{ 00:20:42.369 "params": { 00:20:42.369 "name": "Nvme1", 00:20:42.369 "trtype": "tcp", 00:20:42.369 "traddr": "10.0.0.2", 00:20:42.369 "adrfam": "ipv4", 00:20:42.369 "trsvcid": "4420", 00:20:42.369 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.369 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:42.369 "hdgst": false, 00:20:42.369 "ddgst": false 00:20:42.369 }, 00:20:42.369 "method": "bdev_nvme_attach_controller" 00:20:42.369 },{ 00:20:42.369 "params": { 00:20:42.369 "name": "Nvme2", 00:20:42.369 "trtype": "tcp", 00:20:42.369 "traddr": "10.0.0.2", 00:20:42.369 "adrfam": "ipv4", 00:20:42.369 "trsvcid": "4420", 00:20:42.369 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:42.369 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:42.369 "hdgst": false, 00:20:42.369 "ddgst": false 00:20:42.369 }, 00:20:42.369 "method": "bdev_nvme_attach_controller" 00:20:42.369 }' 00:20:42.369 05:19:31 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:42.369 05:19:31 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:42.369 05:19:31 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:42.369 05:19:31 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:42.369 05:19:31 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:42.369 05:19:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:42.369 05:19:31 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:42.369 05:19:31 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:42.369 05:19:31 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:42.369 
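The last pass below scales out instead of up: three DIF type 2 null bdevs behind three subsystems (cnode0 through cnode2), 4 KiB random reads at queue depth 16, and numjobs=8 over three filenames, which is where the "Starting 24 threads" line just below comes from (8 jobs x 3 files). A hypothetical job layout matching those parameters (bdev names again assumed to follow the NvmeXn1 convention):

  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=4k
  iodepth=16
  numjobs=8

  [filename0]
  filename=Nvme0n1

  [filename1]
  filename=Nvme1n1

  [filename2]
  filename=Nvme2n1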
05:19:31 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:42.369 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:42.369 ... 00:20:42.369 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:42.369 ... 00:20:42.369 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:42.369 ... 00:20:42.369 fio-3.35 00:20:42.369 Starting 24 threads 00:20:42.936 [2024-12-08 05:19:32.570917] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:20:42.936 [2024-12-08 05:19:32.570998] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:55.184 00:20:55.184 filename0: (groupid=0, jobs=1): err= 0: pid=87125: Sun Dec 8 05:19:42 2024 00:20:55.184 read: IOPS=205, BW=823KiB/s (843kB/s)(8260KiB/10035msec) 00:20:55.184 slat (usec): min=5, max=8033, avg=23.00, stdev=249.43 00:20:55.184 clat (msec): min=19, max=155, avg=77.61, stdev=21.76 00:20:55.184 lat (msec): min=19, max=155, avg=77.63, stdev=21.77 00:20:55.184 clat percentiles (msec): 00:20:55.184 | 1.00th=[ 32], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:20:55.184 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:20:55.184 | 70.00th=[ 87], 80.00th=[ 99], 90.00th=[ 109], 95.00th=[ 116], 00:20:55.184 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 144], 99.95th=[ 148], 00:20:55.184 | 99.99th=[ 157] 00:20:55.184 bw ( KiB/s): min= 616, max= 1162, per=4.06%, avg=820.90, stdev=142.68, samples=20 00:20:55.184 iops : min= 154, max= 290, avg=205.20, stdev=35.61, samples=20 00:20:55.184 lat (msec) : 20=0.68%, 50=12.30%, 100=68.86%, 250=18.16% 00:20:55.184 cpu : usr=31.42%, sys=1.80%, ctx=847, majf=0, minf=9 00:20:55.184 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=81.9%, 16=16.9%, 32=0.0%, >=64=0.0% 00:20:55.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.184 complete : 0=0.0%, 4=88.0%, 8=11.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.184 issued rwts: total=2065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.184 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.184 filename0: (groupid=0, jobs=1): err= 0: pid=87126: Sun Dec 8 05:19:42 2024 00:20:55.184 read: IOPS=218, BW=875KiB/s (896kB/s)(8756KiB/10003msec) 00:20:55.184 slat (usec): min=3, max=8026, avg=22.41, stdev=209.81 00:20:55.184 clat (msec): min=9, max=134, avg=73.02, stdev=22.85 00:20:55.184 lat (msec): min=9, max=134, avg=73.04, stdev=22.86 00:20:55.184 clat percentiles (msec): 00:20:55.184 | 1.00th=[ 18], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 50], 00:20:55.184 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 73], 00:20:55.184 | 70.00th=[ 82], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 117], 00:20:55.184 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 134], 99.95th=[ 134], 00:20:55.184 | 99.99th=[ 134] 00:20:55.184 bw ( KiB/s): min= 640, max= 1072, per=4.22%, avg=853.05, stdev=131.19, samples=19 00:20:55.184 iops : min= 160, max= 268, avg=213.26, stdev=32.80, samples=19 00:20:55.184 lat (msec) : 10=0.27%, 20=0.87%, 50=19.46%, 100=64.55%, 250=14.85% 00:20:55.184 cpu : usr=36.45%, sys=2.09%, ctx=1074, majf=0, minf=9 00:20:55.184 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.4%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:55.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:20:55.184 complete : 0=0.0%, 4=87.0%, 8=12.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.184 issued rwts: total=2189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.184 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.184 filename0: (groupid=0, jobs=1): err= 0: pid=87127: Sun Dec 8 05:19:42 2024 00:20:55.184 read: IOPS=211, BW=846KiB/s (866kB/s)(8472KiB/10012msec) 00:20:55.184 slat (usec): min=5, max=8026, avg=26.10, stdev=289.46 00:20:55.184 clat (msec): min=16, max=155, avg=75.49, stdev=22.70 00:20:55.184 lat (msec): min=16, max=155, avg=75.52, stdev=22.69 00:20:55.184 clat percentiles (msec): 00:20:55.184 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 54], 00:20:55.184 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 78], 00:20:55.184 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 120], 00:20:55.184 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 144], 00:20:55.184 | 99.99th=[ 157] 00:20:55.184 bw ( KiB/s): min= 632, max= 1069, per=4.17%, avg=843.05, stdev=148.17, samples=20 00:20:55.184 iops : min= 158, max= 267, avg=210.75, stdev=37.02, samples=20 00:20:55.184 lat (msec) : 20=0.28%, 50=17.33%, 100=65.58%, 250=16.81% 00:20:55.184 cpu : usr=34.63%, sys=2.32%, ctx=988, majf=0, minf=9 00:20:55.184 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.7%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:55.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.184 complete : 0=0.0%, 4=87.5%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.184 issued rwts: total=2118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.184 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.184 filename0: (groupid=0, jobs=1): err= 0: pid=87128: Sun Dec 8 05:19:42 2024 00:20:55.184 read: IOPS=217, BW=871KiB/s (892kB/s)(8716KiB/10003msec) 00:20:55.184 slat (usec): min=4, max=8026, avg=23.05, stdev=202.21 00:20:55.184 clat (msec): min=3, max=131, avg=73.34, stdev=22.79 00:20:55.184 lat (msec): min=3, max=131, avg=73.36, stdev=22.79 00:20:55.184 clat percentiles (msec): 00:20:55.185 | 1.00th=[ 11], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 54], 00:20:55.185 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 77], 00:20:55.185 | 70.00th=[ 83], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 115], 00:20:55.185 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 132], 99.95th=[ 132], 00:20:55.185 | 99.99th=[ 132] 00:20:55.185 bw ( KiB/s): min= 664, max= 1072, per=4.18%, avg=845.11, stdev=131.66, samples=19 00:20:55.185 iops : min= 166, max= 268, avg=211.26, stdev=32.91, samples=19 00:20:55.185 lat (msec) : 4=0.41%, 10=0.55%, 20=0.73%, 50=14.92%, 100=67.28% 00:20:55.185 lat (msec) : 250=16.11% 00:20:55.185 cpu : usr=40.66%, sys=2.52%, ctx=1413, majf=0, minf=9 00:20:55.185 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:55.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.185 complete : 0=0.0%, 4=87.3%, 8=12.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.185 issued rwts: total=2179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.185 filename0: (groupid=0, jobs=1): err= 0: pid=87129: Sun Dec 8 05:19:42 2024 00:20:55.185 read: IOPS=208, BW=835KiB/s (855kB/s)(8360KiB/10010msec) 00:20:55.185 slat (usec): min=4, max=8035, avg=20.93, stdev=196.17 00:20:55.185 clat (msec): min=15, max=159, avg=76.51, stdev=23.72 00:20:55.185 lat (msec): min=15, max=159, avg=76.54, stdev=23.73 00:20:55.185 clat 
percentiles (msec): 00:20:55.185 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 58], 00:20:55.185 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:20:55.185 | 70.00th=[ 86], 80.00th=[ 97], 90.00th=[ 109], 95.00th=[ 117], 00:20:55.185 | 99.00th=[ 132], 99.50th=[ 161], 99.90th=[ 161], 99.95th=[ 161], 00:20:55.185 | 99.99th=[ 161] 00:20:55.185 bw ( KiB/s): min= 528, max= 1080, per=4.12%, avg=832.05, stdev=161.19, samples=20 00:20:55.185 iops : min= 132, max= 270, avg=208.00, stdev=40.28, samples=20 00:20:55.185 lat (msec) : 20=0.48%, 50=16.84%, 100=64.26%, 250=18.42% 00:20:55.185 cpu : usr=40.64%, sys=2.61%, ctx=1071, majf=0, minf=9 00:20:55.185 IO depths : 1=0.1%, 2=1.4%, 4=5.7%, 8=77.6%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:55.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.185 complete : 0=0.0%, 4=88.6%, 8=10.2%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.185 issued rwts: total=2090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.185 filename0: (groupid=0, jobs=1): err= 0: pid=87130: Sun Dec 8 05:19:42 2024 00:20:55.185 read: IOPS=212, BW=849KiB/s (869kB/s)(8520KiB/10039msec) 00:20:55.185 slat (usec): min=7, max=8028, avg=24.16, stdev=229.74 00:20:55.185 clat (msec): min=14, max=151, avg=75.20, stdev=21.74 00:20:55.185 lat (msec): min=14, max=151, avg=75.23, stdev=21.74 00:20:55.185 clat percentiles (msec): 00:20:55.185 | 1.00th=[ 34], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:20:55.185 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 78], 00:20:55.185 | 70.00th=[ 84], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 115], 00:20:55.185 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 128], 99.95th=[ 128], 00:20:55.185 | 99.99th=[ 153] 00:20:55.185 bw ( KiB/s): min= 640, max= 1144, per=4.19%, avg=848.00, stdev=135.04, samples=20 00:20:55.185 iops : min= 160, max= 286, avg=212.00, stdev=33.76, samples=20 00:20:55.185 lat (msec) : 20=0.66%, 50=12.96%, 100=70.14%, 250=16.24% 00:20:55.185 cpu : usr=40.67%, sys=2.22%, ctx=1253, majf=0, minf=9 00:20:55.185 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.7%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:55.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.185 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.185 issued rwts: total=2130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.185 filename0: (groupid=0, jobs=1): err= 0: pid=87131: Sun Dec 8 05:19:42 2024 00:20:55.185 read: IOPS=209, BW=837KiB/s (857kB/s)(8396KiB/10028msec) 00:20:55.185 slat (usec): min=8, max=12030, avg=25.29, stdev=315.13 00:20:55.185 clat (msec): min=26, max=157, avg=76.29, stdev=22.23 00:20:55.185 lat (msec): min=26, max=157, avg=76.32, stdev=22.24 00:20:55.185 clat percentiles (msec): 00:20:55.185 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 58], 00:20:55.185 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 80], 00:20:55.185 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 117], 00:20:55.185 | 99.00th=[ 122], 99.50th=[ 129], 99.90th=[ 155], 99.95th=[ 157], 00:20:55.185 | 99.99th=[ 159] 00:20:55.185 bw ( KiB/s): min= 632, max= 1032, per=4.12%, avg=833.10, stdev=137.30, samples=20 00:20:55.185 iops : min= 158, max= 258, avg=208.25, stdev=34.29, samples=20 00:20:55.185 lat (msec) : 50=15.96%, 100=67.03%, 250=17.01% 00:20:55.185 cpu : usr=34.38%, sys=1.83%, ctx=999, majf=0, minf=9 00:20:55.185 IO depths 
: 1=0.1%, 2=0.3%, 4=1.1%, 8=82.2%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:55.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.185 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.185 issued rwts: total=2099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.185 filename0: (groupid=0, jobs=1): err= 0: pid=87132: Sun Dec 8 05:19:42 2024 00:20:55.185 read: IOPS=221, BW=885KiB/s (906kB/s)(8852KiB/10001msec) 00:20:55.185 slat (usec): min=3, max=8030, avg=28.25, stdev=256.58 00:20:55.185 clat (usec): min=849, max=152230, avg=72192.06, stdev=23552.75 00:20:55.185 lat (usec): min=859, max=152241, avg=72220.32, stdev=23550.64 00:20:55.185 clat percentiles (msec): 00:20:55.185 | 1.00th=[ 8], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 50], 00:20:55.185 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 75], 00:20:55.185 | 70.00th=[ 81], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 116], 00:20:55.185 | 99.00th=[ 122], 99.50th=[ 122], 99.90th=[ 130], 99.95th=[ 153], 00:20:55.185 | 99.99th=[ 153] 00:20:55.185 bw ( KiB/s): min= 664, max= 1024, per=4.24%, avg=858.95, stdev=133.90, samples=19 00:20:55.185 iops : min= 166, max= 256, avg=214.74, stdev=33.47, samples=19 00:20:55.185 lat (usec) : 1000=0.18% 00:20:55.185 lat (msec) : 2=0.14%, 4=0.54%, 10=0.45%, 20=0.54%, 50=18.80% 00:20:55.185 lat (msec) : 100=64.98%, 250=14.37% 00:20:55.185 cpu : usr=42.61%, sys=2.56%, ctx=1288, majf=0, minf=9 00:20:55.185 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=82.7%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:55.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.185 complete : 0=0.0%, 4=87.1%, 8=12.6%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.185 issued rwts: total=2213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.185 filename1: (groupid=0, jobs=1): err= 0: pid=87133: Sun Dec 8 05:19:42 2024 00:20:55.185 read: IOPS=217, BW=871KiB/s (892kB/s)(8728KiB/10021msec) 00:20:55.185 slat (usec): min=3, max=8041, avg=22.48, stdev=242.77 00:20:55.185 clat (msec): min=26, max=131, avg=73.32, stdev=21.38 00:20:55.185 lat (msec): min=26, max=131, avg=73.34, stdev=21.37 00:20:55.185 clat percentiles (msec): 00:20:55.185 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 50], 00:20:55.185 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:20:55.185 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 114], 00:20:55.185 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 132], 99.95th=[ 132], 00:20:55.185 | 99.99th=[ 132] 00:20:55.185 bw ( KiB/s): min= 688, max= 1072, per=4.29%, avg=868.70, stdev=136.23, samples=20 00:20:55.185 iops : min= 172, max= 268, avg=217.15, stdev=34.04, samples=20 00:20:55.185 lat (msec) : 50=20.67%, 100=64.71%, 250=14.62% 00:20:55.185 cpu : usr=37.15%, sys=2.04%, ctx=1151, majf=0, minf=9 00:20:55.185 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=83.0%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:55.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.185 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.185 issued rwts: total=2182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.185 filename1: (groupid=0, jobs=1): err= 0: pid=87134: Sun Dec 8 05:19:42 2024 00:20:55.185 read: IOPS=207, BW=831KiB/s (851kB/s)(8352KiB/10049msec) 00:20:55.185 slat (usec): min=7, 
max=8030, avg=24.78, stdev=236.26 00:20:55.185 clat (msec): min=2, max=158, avg=76.79, stdev=24.20 00:20:55.185 lat (msec): min=2, max=158, avg=76.82, stdev=24.21 00:20:55.185 clat percentiles (msec): 00:20:55.185 | 1.00th=[ 6], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:20:55.185 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:20:55.185 | 70.00th=[ 85], 80.00th=[ 101], 90.00th=[ 109], 95.00th=[ 116], 00:20:55.185 | 99.00th=[ 126], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 153], 00:20:55.185 | 99.99th=[ 159] 00:20:55.185 bw ( KiB/s): min= 632, max= 1396, per=4.09%, avg=827.95, stdev=178.30, samples=20 00:20:55.185 iops : min= 158, max= 349, avg=206.95, stdev=44.60, samples=20 00:20:55.185 lat (msec) : 4=0.77%, 10=1.53%, 20=0.67%, 50=9.77%, 100=67.53% 00:20:55.185 lat (msec) : 250=19.73% 00:20:55.185 cpu : usr=38.35%, sys=2.26%, ctx=1277, majf=0, minf=9 00:20:55.185 IO depths : 1=0.1%, 2=1.0%, 4=3.6%, 8=79.1%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:55.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.185 complete : 0=0.0%, 4=88.6%, 8=10.6%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.185 issued rwts: total=2088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.185 filename1: (groupid=0, jobs=1): err= 0: pid=87135: Sun Dec 8 05:19:42 2024 00:20:55.185 read: IOPS=206, BW=828KiB/s (847kB/s)(8304KiB/10035msec) 00:20:55.185 slat (nsec): min=4528, max=43485, avg=15404.13, stdev=5123.45 00:20:55.185 clat (msec): min=31, max=156, avg=77.20, stdev=22.88 00:20:55.185 lat (msec): min=31, max=156, avg=77.21, stdev=22.88 00:20:55.185 clat percentiles (msec): 00:20:55.185 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:20:55.185 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 81], 00:20:55.185 | 70.00th=[ 92], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 121], 00:20:55.185 | 99.00th=[ 122], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:20:55.185 | 99.99th=[ 157] 00:20:55.185 bw ( KiB/s): min= 608, max= 1066, per=4.07%, avg=823.70, stdev=160.36, samples=20 00:20:55.185 iops : min= 152, max= 266, avg=205.90, stdev=40.05, samples=20 00:20:55.185 lat (msec) : 50=15.27%, 100=66.23%, 250=18.50% 00:20:55.186 cpu : usr=38.74%, sys=1.91%, ctx=1121, majf=0, minf=9 00:20:55.186 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.5%, 16=16.8%, 32=0.0%, >=64=0.0% 00:20:55.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.186 complete : 0=0.0%, 4=87.8%, 8=12.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.186 issued rwts: total=2076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.186 filename1: (groupid=0, jobs=1): err= 0: pid=87136: Sun Dec 8 05:19:42 2024 00:20:55.186 read: IOPS=213, BW=853KiB/s (873kB/s)(8540KiB/10017msec) 00:20:55.186 slat (usec): min=5, max=8028, avg=24.97, stdev=247.17 00:20:55.186 clat (msec): min=16, max=143, avg=74.91, stdev=21.93 00:20:55.186 lat (msec): min=16, max=143, avg=74.93, stdev=21.92 00:20:55.186 clat percentiles (msec): 00:20:55.186 | 1.00th=[ 35], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 54], 00:20:55.186 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 79], 00:20:55.186 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 116], 00:20:55.186 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 129], 99.95th=[ 129], 00:20:55.186 | 99.99th=[ 144] 00:20:55.186 bw ( KiB/s): min= 640, max= 1084, per=4.20%, avg=849.40, stdev=133.86, samples=20 
00:20:55.186 iops : min= 160, max= 271, avg=212.35, stdev=33.47, samples=20 00:20:55.186 lat (msec) : 20=0.28%, 50=14.33%, 100=69.37%, 250=16.02% 00:20:55.186 cpu : usr=40.66%, sys=2.34%, ctx=1340, majf=0, minf=9 00:20:55.186 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.5%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:55.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.186 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.186 issued rwts: total=2135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.186 filename1: (groupid=0, jobs=1): err= 0: pid=87137: Sun Dec 8 05:19:42 2024 00:20:55.186 read: IOPS=207, BW=831KiB/s (851kB/s)(8340KiB/10034msec) 00:20:55.186 slat (nsec): min=4560, max=47793, avg=14650.49, stdev=5262.42 00:20:55.186 clat (msec): min=31, max=158, avg=76.85, stdev=21.61 00:20:55.186 lat (msec): min=31, max=158, avg=76.87, stdev=21.61 00:20:55.186 clat percentiles (msec): 00:20:55.186 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 59], 00:20:55.186 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 79], 00:20:55.186 | 70.00th=[ 86], 80.00th=[ 99], 90.00th=[ 110], 95.00th=[ 115], 00:20:55.186 | 99.00th=[ 122], 99.50th=[ 123], 99.90th=[ 155], 99.95th=[ 157], 00:20:55.186 | 99.99th=[ 159] 00:20:55.186 bw ( KiB/s): min= 608, max= 1031, per=4.10%, avg=829.55, stdev=137.29, samples=20 00:20:55.186 iops : min= 152, max= 257, avg=207.35, stdev=34.27, samples=20 00:20:55.186 lat (msec) : 50=13.24%, 100=68.30%, 250=18.47% 00:20:55.186 cpu : usr=38.52%, sys=2.24%, ctx=1127, majf=0, minf=9 00:20:55.186 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.2%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:55.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.186 complete : 0=0.0%, 4=87.9%, 8=11.7%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.186 issued rwts: total=2085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.186 filename1: (groupid=0, jobs=1): err= 0: pid=87138: Sun Dec 8 05:19:42 2024 00:20:55.186 read: IOPS=205, BW=822KiB/s (841kB/s)(8240KiB/10030msec) 00:20:55.186 slat (usec): min=4, max=8028, avg=24.40, stdev=247.26 00:20:55.186 clat (msec): min=35, max=154, avg=77.72, stdev=21.87 00:20:55.186 lat (msec): min=35, max=154, avg=77.75, stdev=21.87 00:20:55.186 clat percentiles (msec): 00:20:55.186 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 58], 00:20:55.186 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 81], 00:20:55.186 | 70.00th=[ 89], 80.00th=[ 101], 90.00th=[ 110], 95.00th=[ 116], 00:20:55.186 | 99.00th=[ 125], 99.50th=[ 127], 99.90th=[ 146], 99.95th=[ 150], 00:20:55.186 | 99.99th=[ 155] 00:20:55.186 bw ( KiB/s): min= 616, max= 1024, per=4.05%, avg=819.50, stdev=134.95, samples=20 00:20:55.186 iops : min= 154, max= 256, avg=204.85, stdev=33.70, samples=20 00:20:55.186 lat (msec) : 50=14.08%, 100=65.97%, 250=19.95% 00:20:55.186 cpu : usr=37.90%, sys=2.14%, ctx=1287, majf=0, minf=9 00:20:55.186 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.2%, 16=16.7%, 32=0.0%, >=64=0.0% 00:20:55.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.186 complete : 0=0.0%, 4=87.8%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.186 issued rwts: total=2060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.186 filename1: (groupid=0, jobs=1): err= 0: pid=87139: Sun 
Dec 8 05:19:42 2024 00:20:55.186 read: IOPS=207, BW=828KiB/s (848kB/s)(8316KiB/10040msec) 00:20:55.186 slat (usec): min=7, max=8045, avg=23.54, stdev=257.81 00:20:55.186 clat (msec): min=14, max=155, avg=77.07, stdev=21.68 00:20:55.186 lat (msec): min=14, max=155, avg=77.10, stdev=21.68 00:20:55.186 clat percentiles (msec): 00:20:55.186 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 61], 00:20:55.186 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:20:55.186 | 70.00th=[ 85], 80.00th=[ 97], 90.00th=[ 109], 95.00th=[ 117], 00:20:55.186 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 136], 99.95th=[ 157], 00:20:55.186 | 99.99th=[ 157] 00:20:55.186 bw ( KiB/s): min= 632, max= 1064, per=4.09%, avg=827.60, stdev=126.90, samples=20 00:20:55.186 iops : min= 158, max= 266, avg=206.90, stdev=31.73, samples=20 00:20:55.186 lat (msec) : 20=0.67%, 50=12.41%, 100=69.50%, 250=17.41% 00:20:55.186 cpu : usr=34.33%, sys=1.92%, ctx=972, majf=0, minf=9 00:20:55.186 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.4%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:55.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.186 complete : 0=0.0%, 4=87.9%, 8=11.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.186 issued rwts: total=2079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.186 filename1: (groupid=0, jobs=1): err= 0: pid=87140: Sun Dec 8 05:19:42 2024 00:20:55.186 read: IOPS=216, BW=866KiB/s (886kB/s)(8668KiB/10013msec) 00:20:55.186 slat (nsec): min=4623, max=38455, avg=15267.94, stdev=4921.85 00:20:55.186 clat (msec): min=31, max=131, avg=73.81, stdev=21.31 00:20:55.186 lat (msec): min=31, max=131, avg=73.82, stdev=21.31 00:20:55.186 clat percentiles (msec): 00:20:55.186 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 50], 00:20:55.186 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:20:55.186 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 112], 00:20:55.186 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 124], 00:20:55.186 | 99.99th=[ 132] 00:20:55.186 bw ( KiB/s): min= 664, max= 1056, per=4.21%, avg=851.79, stdev=127.68, samples=19 00:20:55.186 iops : min= 166, max= 264, avg=212.95, stdev=31.92, samples=19 00:20:55.186 lat (msec) : 50=20.95%, 100=64.88%, 250=14.17% 00:20:55.186 cpu : usr=31.33%, sys=1.83%, ctx=844, majf=0, minf=9 00:20:55.186 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=82.0%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:55.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.186 complete : 0=0.0%, 4=87.3%, 8=12.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.186 issued rwts: total=2167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.186 filename2: (groupid=0, jobs=1): err= 0: pid=87141: Sun Dec 8 05:19:42 2024 00:20:55.186 read: IOPS=209, BW=838KiB/s (858kB/s)(8408KiB/10037msec) 00:20:55.186 slat (usec): min=3, max=8029, avg=36.69, stdev=427.49 00:20:55.186 clat (msec): min=4, max=157, avg=76.13, stdev=24.79 00:20:55.186 lat (msec): min=4, max=157, avg=76.17, stdev=24.80 00:20:55.186 clat percentiles (msec): 00:20:55.186 | 1.00th=[ 5], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 61], 00:20:55.186 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 82], 00:20:55.186 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 118], 00:20:55.186 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 144], 99.95th=[ 148], 00:20:55.186 | 99.99th=[ 159] 00:20:55.186 bw ( 
KiB/s): min= 608, max= 1675, per=4.14%, avg=836.65, stdev=229.49, samples=20 00:20:55.186 iops : min= 152, max= 418, avg=209.10, stdev=57.21, samples=20 00:20:55.186 lat (msec) : 10=3.04%, 20=0.67%, 50=10.94%, 100=68.93%, 250=16.41% 00:20:55.186 cpu : usr=35.37%, sys=2.21%, ctx=924, majf=0, minf=0 00:20:55.186 IO depths : 1=0.1%, 2=0.5%, 4=1.7%, 8=80.9%, 16=16.9%, 32=0.0%, >=64=0.0% 00:20:55.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.186 complete : 0=0.0%, 4=88.3%, 8=11.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.186 issued rwts: total=2102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.186 filename2: (groupid=0, jobs=1): err= 0: pid=87142: Sun Dec 8 05:19:42 2024 00:20:55.186 read: IOPS=209, BW=839KiB/s (859kB/s)(8420KiB/10036msec) 00:20:55.186 slat (usec): min=7, max=8039, avg=25.27, stdev=265.16 00:20:55.186 clat (msec): min=14, max=157, avg=76.12, stdev=22.59 00:20:55.186 lat (msec): min=14, max=157, avg=76.15, stdev=22.60 00:20:55.186 clat percentiles (msec): 00:20:55.186 | 1.00th=[ 34], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 56], 00:20:55.186 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 80], 00:20:55.186 | 70.00th=[ 85], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 117], 00:20:55.186 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 157], 99.95th=[ 157], 00:20:55.186 | 99.99th=[ 157] 00:20:55.186 bw ( KiB/s): min= 624, max= 1184, per=4.13%, avg=835.60, stdev=150.40, samples=20 00:20:55.186 iops : min= 156, max= 296, avg=208.90, stdev=37.60, samples=20 00:20:55.186 lat (msec) : 20=0.67%, 50=14.49%, 100=67.08%, 250=17.77% 00:20:55.186 cpu : usr=37.78%, sys=2.16%, ctx=1200, majf=0, minf=9 00:20:55.186 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.2%, 16=16.6%, 32=0.0%, >=64=0.0% 00:20:55.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.186 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.186 issued rwts: total=2105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.186 filename2: (groupid=0, jobs=1): err= 0: pid=87143: Sun Dec 8 05:19:42 2024 00:20:55.186 read: IOPS=209, BW=837KiB/s (857kB/s)(8404KiB/10037msec) 00:20:55.186 slat (usec): min=4, max=10030, avg=22.81, stdev=279.94 00:20:55.186 clat (msec): min=4, max=153, avg=76.23, stdev=24.48 00:20:55.187 lat (msec): min=4, max=153, avg=76.26, stdev=24.49 00:20:55.187 clat percentiles (msec): 00:20:55.187 | 1.00th=[ 5], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 61], 00:20:55.187 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:20:55.187 | 70.00th=[ 86], 80.00th=[ 99], 90.00th=[ 109], 95.00th=[ 116], 00:20:55.187 | 99.00th=[ 125], 99.50th=[ 129], 99.90th=[ 144], 99.95th=[ 144], 00:20:55.187 | 99.99th=[ 153] 00:20:55.187 bw ( KiB/s): min= 632, max= 1552, per=4.14%, avg=836.10, stdev=201.54, samples=20 00:20:55.187 iops : min= 158, max= 388, avg=209.00, stdev=50.37, samples=20 00:20:55.187 lat (msec) : 10=3.05%, 20=0.67%, 50=9.66%, 100=68.49%, 250=18.13% 00:20:55.187 cpu : usr=33.59%, sys=1.76%, ctx=1013, majf=0, minf=0 00:20:55.187 IO depths : 1=0.2%, 2=0.6%, 4=1.9%, 8=80.7%, 16=16.7%, 32=0.0%, >=64=0.0% 00:20:55.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.187 complete : 0=0.0%, 4=88.3%, 8=11.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.187 issued rwts: total=2101,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.187 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:20:55.187 filename2: (groupid=0, jobs=1): err= 0: pid=87144: Sun Dec 8 05:19:42 2024 00:20:55.187 read: IOPS=206, BW=825KiB/s (844kB/s)(8276KiB/10036msec) 00:20:55.187 slat (usec): min=6, max=8036, avg=23.05, stdev=249.28 00:20:55.187 clat (msec): min=33, max=155, avg=77.38, stdev=22.89 00:20:55.187 lat (msec): min=33, max=155, avg=77.41, stdev=22.90 00:20:55.187 clat percentiles (msec): 00:20:55.187 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 56], 00:20:55.187 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:20:55.187 | 70.00th=[ 91], 80.00th=[ 100], 90.00th=[ 109], 95.00th=[ 121], 00:20:55.187 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 157], 99.95th=[ 157], 00:20:55.187 | 99.99th=[ 157] 00:20:55.187 bw ( KiB/s): min= 608, max= 1058, per=4.08%, avg=824.10, stdev=157.35, samples=20 00:20:55.187 iops : min= 152, max= 264, avg=206.00, stdev=39.30, samples=20 00:20:55.187 lat (msec) : 50=14.98%, 100=65.49%, 250=19.53% 00:20:55.187 cpu : usr=33.95%, sys=1.69%, ctx=962, majf=0, minf=9 00:20:55.187 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=81.6%, 16=16.7%, 32=0.0%, >=64=0.0% 00:20:55.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.187 complete : 0=0.0%, 4=88.0%, 8=11.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.187 issued rwts: total=2069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.187 filename2: (groupid=0, jobs=1): err= 0: pid=87145: Sun Dec 8 05:19:42 2024 00:20:55.187 read: IOPS=206, BW=824KiB/s (844kB/s)(8272KiB/10035msec) 00:20:55.187 slat (usec): min=7, max=6675, avg=24.94, stdev=232.03 00:20:55.187 clat (msec): min=35, max=152, avg=77.46, stdev=20.97 00:20:55.187 lat (msec): min=35, max=153, avg=77.48, stdev=20.98 00:20:55.187 clat percentiles (msec): 00:20:55.187 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 60], 00:20:55.187 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 79], 00:20:55.187 | 70.00th=[ 86], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 115], 00:20:55.187 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 138], 99.95th=[ 153], 00:20:55.187 | 99.99th=[ 153] 00:20:55.187 bw ( KiB/s): min= 656, max= 992, per=4.07%, avg=823.25, stdev=125.33, samples=20 00:20:55.187 iops : min= 164, max= 248, avg=205.80, stdev=31.31, samples=20 00:20:55.187 lat (msec) : 50=10.20%, 100=71.37%, 250=18.42% 00:20:55.187 cpu : usr=39.42%, sys=2.28%, ctx=1358, majf=0, minf=9 00:20:55.187 IO depths : 1=0.1%, 2=0.3%, 4=0.9%, 8=82.1%, 16=16.6%, 32=0.0%, >=64=0.0% 00:20:55.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.187 complete : 0=0.0%, 4=87.8%, 8=12.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.187 issued rwts: total=2068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.187 filename2: (groupid=0, jobs=1): err= 0: pid=87146: Sun Dec 8 05:19:42 2024 00:20:55.187 read: IOPS=217, BW=869KiB/s (890kB/s)(8700KiB/10010msec) 00:20:55.187 slat (usec): min=4, max=8032, avg=25.37, stdev=253.57 00:20:55.187 clat (msec): min=16, max=131, avg=73.50, stdev=21.43 00:20:55.187 lat (msec): min=16, max=131, avg=73.53, stdev=21.43 00:20:55.187 clat percentiles (msec): 00:20:55.187 | 1.00th=[ 33], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 53], 00:20:55.187 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:20:55.187 | 70.00th=[ 82], 80.00th=[ 94], 90.00th=[ 107], 95.00th=[ 113], 00:20:55.187 | 99.00th=[ 121], 
99.50th=[ 123], 99.90th=[ 132], 99.95th=[ 132], 00:20:55.187 | 99.99th=[ 132] 00:20:55.187 bw ( KiB/s): min= 664, max= 1016, per=4.20%, avg=850.95, stdev=122.38, samples=19 00:20:55.187 iops : min= 166, max= 254, avg=212.74, stdev=30.60, samples=19 00:20:55.187 lat (msec) : 20=0.46%, 50=15.63%, 100=68.92%, 250=14.99% 00:20:55.187 cpu : usr=41.00%, sys=2.36%, ctx=1417, majf=0, minf=9 00:20:55.187 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:55.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.187 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.187 issued rwts: total=2175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.187 filename2: (groupid=0, jobs=1): err= 0: pid=87147: Sun Dec 8 05:19:42 2024 00:20:55.187 read: IOPS=208, BW=834KiB/s (854kB/s)(8360KiB/10021msec) 00:20:55.187 slat (usec): min=5, max=8030, avg=22.73, stdev=247.93 00:20:55.187 clat (msec): min=28, max=155, avg=76.59, stdev=21.94 00:20:55.187 lat (msec): min=28, max=155, avg=76.61, stdev=21.93 00:20:55.187 clat percentiles (msec): 00:20:55.187 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:20:55.187 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:20:55.187 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 118], 00:20:55.187 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 157], 99.95th=[ 157], 00:20:55.187 | 99.99th=[ 157] 00:20:55.187 bw ( KiB/s): min= 624, max= 1048, per=4.11%, avg=831.90, stdev=138.38, samples=20 00:20:55.187 iops : min= 156, max= 262, avg=207.95, stdev=34.58, samples=20 00:20:55.187 lat (msec) : 50=16.32%, 100=66.41%, 250=17.27% 00:20:55.187 cpu : usr=31.52%, sys=1.62%, ctx=842, majf=0, minf=9 00:20:55.187 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.3%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:55.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.187 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.187 issued rwts: total=2090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.187 filename2: (groupid=0, jobs=1): err= 0: pid=87148: Sun Dec 8 05:19:42 2024 00:20:55.187 read: IOPS=211, BW=847KiB/s (867kB/s)(8480KiB/10015msec) 00:20:55.187 slat (usec): min=3, max=12028, avg=32.58, stdev=356.10 00:20:55.187 clat (msec): min=17, max=155, avg=75.43, stdev=22.63 00:20:55.187 lat (msec): min=17, max=155, avg=75.47, stdev=22.63 00:20:55.187 clat percentiles (msec): 00:20:55.187 | 1.00th=[ 33], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 53], 00:20:55.187 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 80], 00:20:55.187 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 117], 00:20:55.187 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 132], 99.95th=[ 132], 00:20:55.187 | 99.99th=[ 157] 00:20:55.187 bw ( KiB/s): min= 592, max= 1128, per=4.16%, avg=841.60, stdev=148.66, samples=20 00:20:55.187 iops : min= 148, max= 282, avg=210.40, stdev=37.17, samples=20 00:20:55.187 lat (msec) : 20=0.47%, 50=16.56%, 100=66.56%, 250=16.42% 00:20:55.187 cpu : usr=37.89%, sys=2.32%, ctx=1252, majf=0, minf=9 00:20:55.187 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.9%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:55.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.187 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.187 issued rwts: 
total=2120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:55.187 00:20:55.187 Run status group 0 (all jobs): 00:20:55.187 READ: bw=19.7MiB/s (20.7MB/s), 822KiB/s-885KiB/s (841kB/s-906kB/s), io=198MiB (208MB), run=10001-10049msec 00:20:55.187 05:19:42 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:55.187 05:19:42 -- target/dif.sh@43 -- # local sub 00:20:55.187 05:19:42 -- target/dif.sh@45 -- # for sub in "$@" 00:20:55.187 05:19:42 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:55.187 05:19:42 -- target/dif.sh@36 -- # local sub_id=0 00:20:55.187 05:19:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:55.187 05:19:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.187 05:19:42 -- common/autotest_common.sh@10 -- # set +x 00:20:55.187 05:19:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.187 05:19:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:55.187 05:19:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.187 05:19:42 -- common/autotest_common.sh@10 -- # set +x 00:20:55.187 05:19:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.187 05:19:42 -- target/dif.sh@45 -- # for sub in "$@" 00:20:55.187 05:19:42 -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:55.187 05:19:42 -- target/dif.sh@36 -- # local sub_id=1 00:20:55.187 05:19:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:55.187 05:19:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.187 05:19:42 -- common/autotest_common.sh@10 -- # set +x 00:20:55.187 05:19:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.187 05:19:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:55.187 05:19:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.187 05:19:42 -- common/autotest_common.sh@10 -- # set +x 00:20:55.187 05:19:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.187 05:19:42 -- target/dif.sh@45 -- # for sub in "$@" 00:20:55.187 05:19:42 -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:55.187 05:19:42 -- target/dif.sh@36 -- # local sub_id=2 00:20:55.187 05:19:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:55.187 05:19:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.187 05:19:42 -- common/autotest_common.sh@10 -- # set +x 00:20:55.187 05:19:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.187 05:19:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:55.187 05:19:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.187 05:19:42 -- common/autotest_common.sh@10 -- # set +x 00:20:55.188 05:19:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.188 05:19:42 -- target/dif.sh@115 -- # NULL_DIF=1 00:20:55.188 05:19:42 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:55.188 05:19:42 -- target/dif.sh@115 -- # numjobs=2 00:20:55.188 05:19:42 -- target/dif.sh@115 -- # iodepth=8 00:20:55.188 05:19:42 -- target/dif.sh@115 -- # runtime=5 00:20:55.188 05:19:42 -- target/dif.sh@115 -- # files=1 00:20:55.188 05:19:42 -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:55.188 05:19:42 -- target/dif.sh@28 -- # local sub 00:20:55.188 05:19:42 -- target/dif.sh@30 -- # for sub in "$@" 00:20:55.188 05:19:42 -- target/dif.sh@31 -- # create_subsystem 0 00:20:55.188 05:19:42 -- target/dif.sh@18 -- # local sub_id=0 00:20:55.188 05:19:42 -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:55.188 05:19:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.188 05:19:42 -- common/autotest_common.sh@10 -- # set +x 00:20:55.188 bdev_null0 00:20:55.188 05:19:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.188 05:19:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:55.188 05:19:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.188 05:19:43 -- common/autotest_common.sh@10 -- # set +x 00:20:55.188 05:19:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.188 05:19:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:55.188 05:19:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.188 05:19:43 -- common/autotest_common.sh@10 -- # set +x 00:20:55.188 05:19:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.188 05:19:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:55.188 05:19:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.188 05:19:43 -- common/autotest_common.sh@10 -- # set +x 00:20:55.188 [2024-12-08 05:19:43.024805] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.188 05:19:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.188 05:19:43 -- target/dif.sh@30 -- # for sub in "$@" 00:20:55.188 05:19:43 -- target/dif.sh@31 -- # create_subsystem 1 00:20:55.188 05:19:43 -- target/dif.sh@18 -- # local sub_id=1 00:20:55.188 05:19:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:55.188 05:19:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.188 05:19:43 -- common/autotest_common.sh@10 -- # set +x 00:20:55.188 bdev_null1 00:20:55.188 05:19:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.188 05:19:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:55.188 05:19:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.188 05:19:43 -- common/autotest_common.sh@10 -- # set +x 00:20:55.188 05:19:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.188 05:19:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:55.188 05:19:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.188 05:19:43 -- common/autotest_common.sh@10 -- # set +x 00:20:55.188 05:19:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.188 05:19:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:55.188 05:19:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.188 05:19:43 -- common/autotest_common.sh@10 -- # set +x 00:20:55.188 05:19:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.188 05:19:43 -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:55.188 05:19:43 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:55.188 05:19:43 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:55.188 05:19:43 -- nvmf/common.sh@520 -- # config=() 00:20:55.188 05:19:43 -- nvmf/common.sh@520 -- # local subsystem config 00:20:55.188 05:19:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:55.188 05:19:43 -- target/dif.sh@82 -- 
# fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:55.188 05:19:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:55.188 { 00:20:55.188 "params": { 00:20:55.188 "name": "Nvme$subsystem", 00:20:55.188 "trtype": "$TEST_TRANSPORT", 00:20:55.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.188 "adrfam": "ipv4", 00:20:55.188 "trsvcid": "$NVMF_PORT", 00:20:55.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.188 "hdgst": ${hdgst:-false}, 00:20:55.188 "ddgst": ${ddgst:-false} 00:20:55.188 }, 00:20:55.188 "method": "bdev_nvme_attach_controller" 00:20:55.188 } 00:20:55.188 EOF 00:20:55.188 )") 00:20:55.188 05:19:43 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:55.188 05:19:43 -- target/dif.sh@82 -- # gen_fio_conf 00:20:55.188 05:19:43 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:55.188 05:19:43 -- target/dif.sh@54 -- # local file 00:20:55.188 05:19:43 -- target/dif.sh@56 -- # cat 00:20:55.188 05:19:43 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:55.188 05:19:43 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:55.188 05:19:43 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:55.188 05:19:43 -- common/autotest_common.sh@1330 -- # shift 00:20:55.188 05:19:43 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:55.188 05:19:43 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:55.188 05:19:43 -- nvmf/common.sh@542 -- # cat 00:20:55.188 05:19:43 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:55.188 05:19:43 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:55.188 05:19:43 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:55.188 05:19:43 -- target/dif.sh@72 -- # (( file <= files )) 00:20:55.188 05:19:43 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:55.188 05:19:43 -- target/dif.sh@73 -- # cat 00:20:55.188 05:19:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:55.188 05:19:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:55.188 { 00:20:55.188 "params": { 00:20:55.188 "name": "Nvme$subsystem", 00:20:55.188 "trtype": "$TEST_TRANSPORT", 00:20:55.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.188 "adrfam": "ipv4", 00:20:55.188 "trsvcid": "$NVMF_PORT", 00:20:55.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.188 "hdgst": ${hdgst:-false}, 00:20:55.188 "ddgst": ${ddgst:-false} 00:20:55.188 }, 00:20:55.188 "method": "bdev_nvme_attach_controller" 00:20:55.188 } 00:20:55.188 EOF 00:20:55.188 )") 00:20:55.188 05:19:43 -- target/dif.sh@72 -- # (( file++ )) 00:20:55.188 05:19:43 -- nvmf/common.sh@542 -- # cat 00:20:55.188 05:19:43 -- target/dif.sh@72 -- # (( file <= files )) 00:20:55.188 05:19:43 -- nvmf/common.sh@544 -- # jq . 
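Editor's note: the wrapper above hands fio two anonymous descriptors, /dev/fd/62 for the SPDK JSON config and /dev/fd/61 for the job file produced by gen_fio_conf, so the job file itself never appears in the log. The following is only a hedged sketch of what that job file is presumed to look like for this pass, reconstructed from the parameters set earlier (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1, rw=randread); the exact option spellings and the Nvme*n1 bdev names are assumptions, not taken from the log:

cat <<FIO > /tmp/dif_rand_params.fio
[global]
# the SPDK bdev ioengine is assumed to require threaded jobs
thread=1
ioengine=spdk_bdev
rw=randread
# read/write/trim block sizes; only the 8k read size is exercised by randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
# bdev exposed by the Nvme0 attach above (name assumed)
filename=Nvme0n1

[filename1]
# bdev exposed by the Nvme1 attach above (name assumed)
filename=Nvme1n1
FIO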
00:20:55.188 05:19:43 -- nvmf/common.sh@545 -- # IFS=, 00:20:55.188 05:19:43 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:55.188 "params": { 00:20:55.188 "name": "Nvme0", 00:20:55.188 "trtype": "tcp", 00:20:55.188 "traddr": "10.0.0.2", 00:20:55.188 "adrfam": "ipv4", 00:20:55.188 "trsvcid": "4420", 00:20:55.188 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:55.188 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:55.188 "hdgst": false, 00:20:55.188 "ddgst": false 00:20:55.188 }, 00:20:55.188 "method": "bdev_nvme_attach_controller" 00:20:55.188 },{ 00:20:55.188 "params": { 00:20:55.188 "name": "Nvme1", 00:20:55.188 "trtype": "tcp", 00:20:55.188 "traddr": "10.0.0.2", 00:20:55.188 "adrfam": "ipv4", 00:20:55.188 "trsvcid": "4420", 00:20:55.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:55.188 "hdgst": false, 00:20:55.188 "ddgst": false 00:20:55.188 }, 00:20:55.188 "method": "bdev_nvme_attach_controller" 00:20:55.188 }' 00:20:55.188 05:19:43 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:55.188 05:19:43 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:55.188 05:19:43 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:55.188 05:19:43 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:55.188 05:19:43 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:55.188 05:19:43 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:55.188 05:19:43 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:55.188 05:19:43 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:55.188 05:19:43 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:55.188 05:19:43 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:55.188 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:55.188 ... 00:20:55.188 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:55.188 ... 00:20:55.188 fio-3.35 00:20:55.188 Starting 4 threads 00:20:55.188 [2024-12-08 05:19:43.637631] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
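For reproduction outside the test harness, the same run can be launched with the two generated documents saved to ordinary files instead of /dev/fd descriptors. A minimal sketch, assuming nvme.json holds the attach-controller config printed above and dif.fio holds the job file:

# preload the SPDK fio bdev plugin and point fio at the saved config and job file
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf nvme.json dif.fio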
00:20:55.188 [2024-12-08 05:19:43.637713] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:59.426 00:20:59.426 filename0: (groupid=0, jobs=1): err= 0: pid=87290: Sun Dec 8 05:19:48 2024 00:20:59.426 read: IOPS=1899, BW=14.8MiB/s (15.6MB/s)(74.2MiB/5002msec) 00:20:59.426 slat (nsec): min=7797, max=50783, avg=15589.80, stdev=3658.34 00:20:59.426 clat (usec): min=1330, max=11536, avg=4160.27, stdev=900.09 00:20:59.426 lat (usec): min=1344, max=11555, avg=4175.86, stdev=899.51 00:20:59.426 clat percentiles (usec): 00:20:59.426 | 1.00th=[ 1860], 5.00th=[ 2376], 10.00th=[ 2638], 20.00th=[ 3785], 00:20:59.426 | 30.00th=[ 3851], 40.00th=[ 3916], 50.00th=[ 4424], 60.00th=[ 4490], 00:20:59.426 | 70.00th=[ 4621], 80.00th=[ 4817], 90.00th=[ 4948], 95.00th=[ 5342], 00:20:59.426 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 9241], 99.95th=[10290], 00:20:59.426 | 99.99th=[11600] 00:20:59.426 bw ( KiB/s): min=13440, max=17152, per=23.65%, avg=15191.40, stdev=1443.54, samples=10 00:20:59.426 iops : min= 1680, max= 2144, avg=1898.90, stdev=180.48, samples=10 00:20:59.426 lat (msec) : 2=1.41%, 4=40.94%, 10=57.57%, 20=0.07% 00:20:59.426 cpu : usr=92.66%, sys=6.44%, ctx=52, majf=0, minf=9 00:20:59.426 IO depths : 1=0.1%, 2=14.4%, 4=56.4%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.426 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.426 issued rwts: total=9501,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.426 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:59.426 filename0: (groupid=0, jobs=1): err= 0: pid=87291: Sun Dec 8 05:19:48 2024 00:20:59.426 read: IOPS=2112, BW=16.5MiB/s (17.3MB/s)(82.6MiB/5001msec) 00:20:59.426 slat (nsec): min=7713, max=46100, avg=14594.48, stdev=4348.12 00:20:59.426 clat (usec): min=736, max=11552, avg=3744.06, stdev=1071.48 00:20:59.426 lat (usec): min=744, max=11570, avg=3758.65, stdev=1071.23 00:20:59.426 clat percentiles (usec): 00:20:59.426 | 1.00th=[ 1582], 5.00th=[ 2073], 10.00th=[ 2180], 20.00th=[ 2606], 00:20:59.426 | 30.00th=[ 2999], 40.00th=[ 3785], 50.00th=[ 3851], 60.00th=[ 4080], 00:20:59.426 | 70.00th=[ 4359], 80.00th=[ 4621], 90.00th=[ 4883], 95.00th=[ 5145], 00:20:59.426 | 99.00th=[ 6390], 99.50th=[ 6980], 99.90th=[ 9241], 99.95th=[10421], 00:20:59.426 | 99.99th=[10945] 00:20:59.426 bw ( KiB/s): min=15200, max=19200, per=26.59%, avg=17075.56, stdev=1523.25, samples=9 00:20:59.426 iops : min= 1900, max= 2400, avg=2134.44, stdev=190.41, samples=9 00:20:59.426 lat (usec) : 750=0.02%, 1000=0.04% 00:20:59.426 lat (msec) : 2=3.04%, 4=55.46%, 10=41.38%, 20=0.07% 00:20:59.426 cpu : usr=91.82%, sys=7.26%, ctx=4, majf=0, minf=9 00:20:59.426 IO depths : 1=0.1%, 2=6.3%, 4=60.8%, 8=32.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.426 complete : 0=0.0%, 4=97.6%, 8=2.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.426 issued rwts: total=10567,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.426 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:59.426 filename1: (groupid=0, jobs=1): err= 0: pid=87292: Sun Dec 8 05:19:48 2024 00:20:59.426 read: IOPS=1972, BW=15.4MiB/s (16.2MB/s)(77.1MiB/5002msec) 00:20:59.426 slat (nsec): min=4351, max=49841, avg=14953.60, stdev=4043.43 00:20:59.426 clat (usec): min=1290, max=11502, avg=4009.67, stdev=952.61 00:20:59.426 lat (usec): min=1304, max=11515, avg=4024.63, stdev=952.68 
00:20:59.426 clat percentiles (usec): 00:20:59.426 | 1.00th=[ 1696], 5.00th=[ 2311], 10.00th=[ 2573], 20.00th=[ 3097], 00:20:59.426 | 30.00th=[ 3785], 40.00th=[ 3851], 50.00th=[ 4080], 60.00th=[ 4490], 00:20:59.426 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 4883], 95.00th=[ 5276], 00:20:59.426 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 9372], 99.95th=[10290], 00:20:59.426 | 99.99th=[11469] 00:20:59.426 bw ( KiB/s): min=13568, max=17248, per=24.32%, avg=15619.56, stdev=1282.42, samples=9 00:20:59.426 iops : min= 1696, max= 2156, avg=1952.44, stdev=160.30, samples=9 00:20:59.426 lat (msec) : 2=1.63%, 4=46.64%, 10=51.65%, 20=0.07% 00:20:59.426 cpu : usr=92.18%, sys=6.92%, ctx=55, majf=0, minf=9 00:20:59.426 IO depths : 1=0.1%, 2=11.4%, 4=58.0%, 8=30.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.426 complete : 0=0.0%, 4=95.7%, 8=4.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.426 issued rwts: total=9864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.426 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:59.426 filename1: (groupid=0, jobs=1): err= 0: pid=87293: Sun Dec 8 05:19:48 2024 00:20:59.426 read: IOPS=2044, BW=16.0MiB/s (16.7MB/s)(79.9MiB/5003msec) 00:20:59.426 slat (nsec): min=3588, max=47320, avg=12245.33, stdev=4460.45 00:20:59.426 clat (usec): min=1087, max=10988, avg=3875.44, stdev=1040.75 00:20:59.426 lat (usec): min=1095, max=10996, avg=3887.68, stdev=1040.76 00:20:59.426 clat percentiles (usec): 00:20:59.426 | 1.00th=[ 1582], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2769], 00:20:59.426 | 30.00th=[ 3785], 40.00th=[ 3851], 50.00th=[ 3916], 60.00th=[ 4228], 00:20:59.426 | 70.00th=[ 4424], 80.00th=[ 4817], 90.00th=[ 4883], 95.00th=[ 5145], 00:20:59.427 | 99.00th=[ 6325], 99.50th=[ 6456], 99.90th=[ 8455], 99.95th=[ 9241], 00:20:59.427 | 99.99th=[10421] 00:20:59.427 bw ( KiB/s): min=13328, max=19104, per=25.48%, avg=16360.00, stdev=1796.47, samples=10 00:20:59.427 iops : min= 1666, max= 2388, avg=2045.00, stdev=224.56, samples=10 00:20:59.427 lat (msec) : 2=2.76%, 4=50.40%, 10=46.82%, 20=0.03% 00:20:59.427 cpu : usr=92.18%, sys=6.92%, ctx=11, majf=0, minf=9 00:20:59.427 IO depths : 1=0.1%, 2=8.8%, 4=59.5%, 8=31.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.427 complete : 0=0.0%, 4=96.7%, 8=3.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.427 issued rwts: total=10229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.427 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:59.427 00:20:59.427 Run status group 0 (all jobs): 00:20:59.427 READ: bw=62.7MiB/s (65.8MB/s), 14.8MiB/s-16.5MiB/s (15.6MB/s-17.3MB/s), io=314MiB (329MB), run=5001-5003msec 00:20:59.427 05:19:48 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:59.427 05:19:48 -- target/dif.sh@43 -- # local sub 00:20:59.427 05:19:48 -- target/dif.sh@45 -- # for sub in "$@" 00:20:59.427 05:19:48 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:59.427 05:19:48 -- target/dif.sh@36 -- # local sub_id=0 00:20:59.427 05:19:48 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:59.427 05:19:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.427 05:19:48 -- common/autotest_common.sh@10 -- # set +x 00:20:59.427 05:19:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.427 05:19:48 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:59.427 05:19:48 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.427 05:19:48 -- common/autotest_common.sh@10 -- # set +x 00:20:59.427 05:19:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.427 05:19:48 -- target/dif.sh@45 -- # for sub in "$@" 00:20:59.427 05:19:48 -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:59.427 05:19:48 -- target/dif.sh@36 -- # local sub_id=1 00:20:59.427 05:19:48 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:59.427 05:19:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.427 05:19:48 -- common/autotest_common.sh@10 -- # set +x 00:20:59.427 05:19:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.427 05:19:48 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:59.427 05:19:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.427 05:19:48 -- common/autotest_common.sh@10 -- # set +x 00:20:59.427 05:19:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.427 00:20:59.427 real 0m22.974s 00:20:59.427 user 2m3.350s 00:20:59.427 sys 0m8.356s 00:20:59.427 05:19:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:59.427 ************************************ 00:20:59.427 05:19:48 -- common/autotest_common.sh@10 -- # set +x 00:20:59.427 END TEST fio_dif_rand_params 00:20:59.427 ************************************ 00:20:59.427 05:19:48 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:59.427 05:19:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:59.427 05:19:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:59.427 05:19:48 -- common/autotest_common.sh@10 -- # set +x 00:20:59.427 ************************************ 00:20:59.427 START TEST fio_dif_digest 00:20:59.427 ************************************ 00:20:59.427 05:19:48 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:20:59.427 05:19:48 -- target/dif.sh@123 -- # local NULL_DIF 00:20:59.427 05:19:48 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:59.427 05:19:48 -- target/dif.sh@125 -- # local hdgst ddgst 00:20:59.427 05:19:48 -- target/dif.sh@127 -- # NULL_DIF=3 00:20:59.427 05:19:48 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:59.427 05:19:48 -- target/dif.sh@127 -- # numjobs=3 00:20:59.427 05:19:48 -- target/dif.sh@127 -- # iodepth=3 00:20:59.427 05:19:48 -- target/dif.sh@127 -- # runtime=10 00:20:59.427 05:19:49 -- target/dif.sh@128 -- # hdgst=true 00:20:59.427 05:19:49 -- target/dif.sh@128 -- # ddgst=true 00:20:59.427 05:19:49 -- target/dif.sh@130 -- # create_subsystems 0 00:20:59.427 05:19:49 -- target/dif.sh@28 -- # local sub 00:20:59.427 05:19:49 -- target/dif.sh@30 -- # for sub in "$@" 00:20:59.427 05:19:49 -- target/dif.sh@31 -- # create_subsystem 0 00:20:59.427 05:19:49 -- target/dif.sh@18 -- # local sub_id=0 00:20:59.427 05:19:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:59.427 05:19:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.427 05:19:49 -- common/autotest_common.sh@10 -- # set +x 00:20:59.427 bdev_null0 00:20:59.427 05:19:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.427 05:19:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:59.427 05:19:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.427 05:19:49 -- common/autotest_common.sh@10 -- # set +x 00:20:59.427 05:19:49 -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:59.427 05:19:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:59.427 05:19:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.427 05:19:49 -- common/autotest_common.sh@10 -- # set +x 00:20:59.427 05:19:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.427 05:19:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:59.427 05:19:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.427 05:19:49 -- common/autotest_common.sh@10 -- # set +x 00:20:59.427 [2024-12-08 05:19:49.030108] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.427 05:19:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.427 05:19:49 -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:59.427 05:19:49 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:59.427 05:19:49 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:59.427 05:19:49 -- nvmf/common.sh@520 -- # config=() 00:20:59.427 05:19:49 -- nvmf/common.sh@520 -- # local subsystem config 00:20:59.427 05:19:49 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:59.427 05:19:49 -- target/dif.sh@82 -- # gen_fio_conf 00:20:59.427 05:19:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:59.427 05:19:49 -- target/dif.sh@54 -- # local file 00:20:59.427 05:19:49 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:59.427 05:19:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:59.427 { 00:20:59.427 "params": { 00:20:59.427 "name": "Nvme$subsystem", 00:20:59.427 "trtype": "$TEST_TRANSPORT", 00:20:59.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.427 "adrfam": "ipv4", 00:20:59.427 "trsvcid": "$NVMF_PORT", 00:20:59.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.427 "hdgst": ${hdgst:-false}, 00:20:59.427 "ddgst": ${ddgst:-false} 00:20:59.427 }, 00:20:59.427 "method": "bdev_nvme_attach_controller" 00:20:59.427 } 00:20:59.427 EOF 00:20:59.427 )") 00:20:59.427 05:19:49 -- target/dif.sh@56 -- # cat 00:20:59.427 05:19:49 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:59.427 05:19:49 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:59.427 05:19:49 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:59.427 05:19:49 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:59.427 05:19:49 -- common/autotest_common.sh@1330 -- # shift 00:20:59.427 05:19:49 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:59.427 05:19:49 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:59.427 05:19:49 -- nvmf/common.sh@542 -- # cat 00:20:59.427 05:19:49 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:59.427 05:19:49 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:59.427 05:19:49 -- target/dif.sh@72 -- # (( file <= files )) 00:20:59.427 05:19:49 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:59.427 05:19:49 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:59.427 05:19:49 -- nvmf/common.sh@544 -- # jq . 
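The rpc_cmd calls traced above prepare the target side for the digest pass. Assuming rpc_cmd simply forwards its arguments to SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock, the equivalent manual setup would be:

# null bdev with 16-byte metadata and DIF type 3 (NULL_DIF=3 for this test)
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# subsystem backed by that bdev, listening on NVMe/TCP 10.0.0.2:4420
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420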
00:20:59.427 05:19:49 -- nvmf/common.sh@545 -- # IFS=, 00:20:59.427 05:19:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:59.427 "params": { 00:20:59.427 "name": "Nvme0", 00:20:59.427 "trtype": "tcp", 00:20:59.427 "traddr": "10.0.0.2", 00:20:59.427 "adrfam": "ipv4", 00:20:59.427 "trsvcid": "4420", 00:20:59.427 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:59.427 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:59.427 "hdgst": true, 00:20:59.427 "ddgst": true 00:20:59.427 }, 00:20:59.427 "method": "bdev_nvme_attach_controller" 00:20:59.427 }' 00:20:59.427 05:19:49 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:59.427 05:19:49 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:59.427 05:19:49 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:59.427 05:19:49 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:59.427 05:19:49 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:59.427 05:19:49 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:59.427 05:19:49 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:59.427 05:19:49 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:59.427 05:19:49 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:59.427 05:19:49 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:59.686 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:59.686 ... 00:20:59.686 fio-3.35 00:20:59.686 Starting 3 threads 00:20:59.944 [2024-12-08 05:19:49.561174] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
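The fio run launched above feeds both its job file and the attach-controller JSON through /dev/fd descriptors; an equivalent standalone invocation would save them to files first. A rough sketch with the job parameters dif.sh generated for this test (128k blocks, 3 jobs, queue depth 3, 10 s of random reads); the file names and the Nvme0n1 bdev name are illustrative assumptions, not from the trace:

  cat > digest.fio <<'EOF'
  [global]
  ioengine=spdk_bdev
  spdk_json_conf=./attach_nvme0.json   ; the bdev_nvme_attach_controller JSON printed above
  thread=1
  time_based=1
  runtime=10
  [filename0]
  filename=Nvme0n1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  EOF
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev fio digest.fio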
00:20:59.944 [2024-12-08 05:19:49.561244] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:09.946 00:21:09.947 filename0: (groupid=0, jobs=1): err= 0: pid=87399: Sun Dec 8 05:19:59 2024 00:21:09.947 read: IOPS=221, BW=27.7MiB/s (29.0MB/s)(277MiB/10014msec) 00:21:09.947 slat (nsec): min=7060, max=41925, avg=16070.73, stdev=4902.04 00:21:09.947 clat (usec): min=13111, max=23134, avg=13513.62, stdev=841.00 00:21:09.947 lat (usec): min=13121, max=23148, avg=13529.69, stdev=841.00 00:21:09.947 clat percentiles (usec): 00:21:09.947 | 1.00th=[13173], 5.00th=[13173], 10.00th=[13173], 20.00th=[13173], 00:21:09.947 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13304], 60.00th=[13304], 00:21:09.947 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13829], 95.00th=[15533], 00:21:09.947 | 99.00th=[16909], 99.50th=[17433], 99.90th=[23200], 99.95th=[23200], 00:21:09.947 | 99.99th=[23200] 00:21:09.947 bw ( KiB/s): min=23808, max=29184, per=33.33%, avg=28339.20, stdev=1166.06, samples=20 00:21:09.947 iops : min= 186, max= 228, avg=221.40, stdev= 9.11, samples=20 00:21:09.947 lat (msec) : 20=99.86%, 50=0.14% 00:21:09.947 cpu : usr=92.04%, sys=7.35%, ctx=8, majf=0, minf=9 00:21:09.947 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:09.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.947 issued rwts: total=2217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.947 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:09.947 filename0: (groupid=0, jobs=1): err= 0: pid=87400: Sun Dec 8 05:19:59 2024 00:21:09.947 read: IOPS=221, BW=27.7MiB/s (29.0MB/s)(277MiB/10012msec) 00:21:09.947 slat (nsec): min=8155, max=71024, avg=17051.45, stdev=4717.47 00:21:09.947 clat (usec): min=13114, max=23148, avg=13508.97, stdev=830.82 00:21:09.947 lat (usec): min=13122, max=23164, avg=13526.03, stdev=830.96 00:21:09.947 clat percentiles (usec): 00:21:09.947 | 1.00th=[13173], 5.00th=[13173], 10.00th=[13173], 20.00th=[13173], 00:21:09.947 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13304], 60.00th=[13304], 00:21:09.947 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13829], 95.00th=[15533], 00:21:09.947 | 99.00th=[16909], 99.50th=[17433], 99.90th=[23200], 99.95th=[23200], 00:21:09.947 | 99.99th=[23200] 00:21:09.947 bw ( KiB/s): min=23808, max=29184, per=33.33%, avg=28336.35, stdev=1192.26, samples=20 00:21:09.947 iops : min= 186, max= 228, avg=221.35, stdev= 9.31, samples=20 00:21:09.947 lat (msec) : 20=99.86%, 50=0.14% 00:21:09.947 cpu : usr=91.83%, sys=7.57%, ctx=18, majf=0, minf=9 00:21:09.947 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:09.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.947 issued rwts: total=2217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.947 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:09.947 filename0: (groupid=0, jobs=1): err= 0: pid=87401: Sun Dec 8 05:19:59 2024 00:21:09.947 read: IOPS=221, BW=27.7MiB/s (29.0MB/s)(277MiB/10010msec) 00:21:09.947 slat (nsec): min=8125, max=53073, avg=17321.46, stdev=5100.84 00:21:09.947 clat (usec): min=13122, max=23143, avg=13505.19, stdev=823.74 00:21:09.947 lat (usec): min=13141, max=23161, avg=13522.51, stdev=823.73 00:21:09.947 clat percentiles (usec): 00:21:09.947 | 1.00th=[13173], 
5.00th=[13173], 10.00th=[13173], 20.00th=[13173], 00:21:09.947 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13304], 60.00th=[13304], 00:21:09.947 | 70.00th=[13304], 80.00th=[13304], 90.00th=[13829], 95.00th=[15533], 00:21:09.947 | 99.00th=[16909], 99.50th=[17433], 99.90th=[23200], 99.95th=[23200], 00:21:09.947 | 99.99th=[23200] 00:21:09.947 bw ( KiB/s): min=23808, max=29184, per=33.33%, avg=28339.15, stdev=1192.52, samples=20 00:21:09.947 iops : min= 186, max= 228, avg=221.35, stdev= 9.31, samples=20 00:21:09.947 lat (msec) : 20=99.86%, 50=0.14% 00:21:09.947 cpu : usr=92.26%, sys=7.12%, ctx=6, majf=0, minf=9 00:21:09.947 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:09.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.947 issued rwts: total=2217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.947 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:09.947 00:21:09.947 Run status group 0 (all jobs): 00:21:09.947 READ: bw=83.0MiB/s (87.1MB/s), 27.7MiB/s-27.7MiB/s (29.0MB/s-29.0MB/s), io=831MiB (872MB), run=10010-10014msec 00:21:10.205 05:19:59 -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:10.205 05:19:59 -- target/dif.sh@43 -- # local sub 00:21:10.205 05:19:59 -- target/dif.sh@45 -- # for sub in "$@" 00:21:10.205 05:19:59 -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:10.205 05:19:59 -- target/dif.sh@36 -- # local sub_id=0 00:21:10.205 05:19:59 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:10.205 05:19:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.205 05:19:59 -- common/autotest_common.sh@10 -- # set +x 00:21:10.205 05:19:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.205 05:19:59 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:10.205 05:19:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.205 05:19:59 -- common/autotest_common.sh@10 -- # set +x 00:21:10.205 05:19:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.205 00:21:10.205 real 0m10.848s 00:21:10.205 user 0m28.171s 00:21:10.205 sys 0m2.419s 00:21:10.205 05:19:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:10.205 05:19:59 -- common/autotest_common.sh@10 -- # set +x 00:21:10.205 ************************************ 00:21:10.205 END TEST fio_dif_digest 00:21:10.205 ************************************ 00:21:10.205 05:19:59 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:10.205 05:19:59 -- target/dif.sh@147 -- # nvmftestfini 00:21:10.205 05:19:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:10.205 05:19:59 -- nvmf/common.sh@116 -- # sync 00:21:10.205 05:19:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:10.205 05:19:59 -- nvmf/common.sh@119 -- # set +e 00:21:10.205 05:19:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:10.205 05:19:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:10.205 rmmod nvme_tcp 00:21:10.205 rmmod nvme_fabrics 00:21:10.205 rmmod nvme_keyring 00:21:10.205 05:19:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:10.205 05:19:59 -- nvmf/common.sh@123 -- # set -e 00:21:10.205 05:19:59 -- nvmf/common.sh@124 -- # return 0 00:21:10.205 05:19:59 -- nvmf/common.sh@477 -- # '[' -n 86656 ']' 00:21:10.205 05:19:59 -- nvmf/common.sh@478 -- # killprocess 86656 00:21:10.205 05:19:59 -- common/autotest_common.sh@936 -- # '[' -z 86656 ']' 00:21:10.205 05:19:59 -- 
common/autotest_common.sh@940 -- # kill -0 86656 00:21:10.205 05:19:59 -- common/autotest_common.sh@941 -- # uname 00:21:10.205 05:19:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:10.463 05:19:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86656 00:21:10.463 05:20:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:10.463 05:20:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:10.463 killing process with pid 86656 00:21:10.463 05:20:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86656' 00:21:10.463 05:20:00 -- common/autotest_common.sh@955 -- # kill 86656 00:21:10.463 05:20:00 -- common/autotest_common.sh@960 -- # wait 86656 00:21:10.463 05:20:00 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:21:10.463 05:20:00 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:10.747 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:10.747 Waiting for block devices as requested 00:21:11.006 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:21:11.006 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:21:11.006 05:20:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:11.006 05:20:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:11.006 05:20:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:11.006 05:20:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:11.006 05:20:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.006 05:20:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:11.006 05:20:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.006 05:20:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:11.006 00:21:11.006 real 0m57.938s 00:21:11.006 user 3m45.251s 00:21:11.006 sys 0m19.170s 00:21:11.006 05:20:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:11.006 05:20:00 -- common/autotest_common.sh@10 -- # set +x 00:21:11.006 ************************************ 00:21:11.006 END TEST nvmf_dif 00:21:11.006 ************************************ 00:21:11.006 05:20:00 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:11.006 05:20:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:11.006 05:20:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:11.006 05:20:00 -- common/autotest_common.sh@10 -- # set +x 00:21:11.006 ************************************ 00:21:11.006 START TEST nvmf_abort_qd_sizes 00:21:11.006 ************************************ 00:21:11.006 05:20:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:11.265 * Looking for test storage... 
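The cleanup traced above (nvmftestfini followed by setup.sh reset) always runs the same sequence; condensed here, with the pid and paths from this run kept only as illustration:

  sync
  modprobe -v -r nvme-tcp        # also drops nvme_fabrics / nvme_keyring, as the rmmod lines above show
  modprobe -v -r nvme-fabrics
  kill 86656 && wait 86656       # stop the nvmf_tgt started for the dif tests
  /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset   # rebind the NVMe devices to kernel drivers
  ip -4 addr flush nvmf_init_if  # drop the initiator-side test address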
00:21:11.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:11.265 05:20:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:11.265 05:20:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:11.265 05:20:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:11.265 05:20:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:11.265 05:20:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:11.265 05:20:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:11.265 05:20:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:11.265 05:20:00 -- scripts/common.sh@335 -- # IFS=.-: 00:21:11.265 05:20:00 -- scripts/common.sh@335 -- # read -ra ver1 00:21:11.265 05:20:00 -- scripts/common.sh@336 -- # IFS=.-: 00:21:11.265 05:20:00 -- scripts/common.sh@336 -- # read -ra ver2 00:21:11.265 05:20:00 -- scripts/common.sh@337 -- # local 'op=<' 00:21:11.265 05:20:00 -- scripts/common.sh@339 -- # ver1_l=2 00:21:11.265 05:20:00 -- scripts/common.sh@340 -- # ver2_l=1 00:21:11.265 05:20:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:11.265 05:20:00 -- scripts/common.sh@343 -- # case "$op" in 00:21:11.265 05:20:00 -- scripts/common.sh@344 -- # : 1 00:21:11.265 05:20:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:11.265 05:20:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:11.265 05:20:00 -- scripts/common.sh@364 -- # decimal 1 00:21:11.265 05:20:00 -- scripts/common.sh@352 -- # local d=1 00:21:11.265 05:20:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:11.265 05:20:00 -- scripts/common.sh@354 -- # echo 1 00:21:11.265 05:20:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:11.265 05:20:00 -- scripts/common.sh@365 -- # decimal 2 00:21:11.265 05:20:00 -- scripts/common.sh@352 -- # local d=2 00:21:11.265 05:20:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:11.265 05:20:00 -- scripts/common.sh@354 -- # echo 2 00:21:11.265 05:20:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:11.265 05:20:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:11.265 05:20:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:11.265 05:20:00 -- scripts/common.sh@367 -- # return 0 00:21:11.265 05:20:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:11.265 05:20:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:11.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.265 --rc genhtml_branch_coverage=1 00:21:11.265 --rc genhtml_function_coverage=1 00:21:11.265 --rc genhtml_legend=1 00:21:11.265 --rc geninfo_all_blocks=1 00:21:11.265 --rc geninfo_unexecuted_blocks=1 00:21:11.265 00:21:11.265 ' 00:21:11.265 05:20:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:11.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.265 --rc genhtml_branch_coverage=1 00:21:11.265 --rc genhtml_function_coverage=1 00:21:11.265 --rc genhtml_legend=1 00:21:11.265 --rc geninfo_all_blocks=1 00:21:11.265 --rc geninfo_unexecuted_blocks=1 00:21:11.265 00:21:11.265 ' 00:21:11.265 05:20:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:11.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.265 --rc genhtml_branch_coverage=1 00:21:11.265 --rc genhtml_function_coverage=1 00:21:11.265 --rc genhtml_legend=1 00:21:11.265 --rc geninfo_all_blocks=1 00:21:11.265 --rc geninfo_unexecuted_blocks=1 00:21:11.265 00:21:11.265 ' 00:21:11.265 
05:20:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:11.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.265 --rc genhtml_branch_coverage=1 00:21:11.265 --rc genhtml_function_coverage=1 00:21:11.265 --rc genhtml_legend=1 00:21:11.265 --rc geninfo_all_blocks=1 00:21:11.265 --rc geninfo_unexecuted_blocks=1 00:21:11.265 00:21:11.265 ' 00:21:11.265 05:20:00 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:11.265 05:20:00 -- nvmf/common.sh@7 -- # uname -s 00:21:11.265 05:20:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:11.265 05:20:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:11.265 05:20:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:11.265 05:20:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:11.265 05:20:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:11.265 05:20:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:11.265 05:20:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:11.265 05:20:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:11.265 05:20:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:11.265 05:20:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:11.265 05:20:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:21:11.265 05:20:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 00:21:11.265 05:20:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:11.265 05:20:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:11.265 05:20:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:11.265 05:20:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:11.265 05:20:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:11.265 05:20:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:11.265 05:20:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:11.265 05:20:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.265 05:20:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.265 05:20:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.265 05:20:00 -- paths/export.sh@5 -- # export PATH 00:21:11.265 05:20:00 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.265 05:20:00 -- nvmf/common.sh@46 -- # : 0 00:21:11.265 05:20:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:11.265 05:20:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:11.265 05:20:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:11.265 05:20:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:11.265 05:20:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:11.265 05:20:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:11.265 05:20:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:11.265 05:20:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:11.265 05:20:00 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:21:11.265 05:20:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:11.265 05:20:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:11.265 05:20:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:11.265 05:20:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:11.265 05:20:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:11.265 05:20:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.265 05:20:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:11.265 05:20:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.265 05:20:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:11.265 05:20:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:11.265 05:20:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:11.265 05:20:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:11.265 05:20:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:11.265 05:20:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:11.265 05:20:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.265 05:20:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.265 05:20:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:11.265 05:20:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:11.265 05:20:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:11.265 05:20:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:11.265 05:20:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:11.265 05:20:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.265 05:20:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:11.265 05:20:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:11.265 05:20:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:11.265 05:20:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:11.265 05:20:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:11.265 05:20:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:11.265 Cannot find device "nvmf_tgt_br" 00:21:11.266 05:20:01 -- nvmf/common.sh@154 -- # true 00:21:11.266 05:20:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:11.266 Cannot find device "nvmf_tgt_br2" 00:21:11.266 05:20:01 -- nvmf/common.sh@155 -- # true 
00:21:11.266 05:20:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:11.266 05:20:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:11.266 Cannot find device "nvmf_tgt_br" 00:21:11.266 05:20:01 -- nvmf/common.sh@157 -- # true 00:21:11.266 05:20:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:11.266 Cannot find device "nvmf_tgt_br2" 00:21:11.266 05:20:01 -- nvmf/common.sh@158 -- # true 00:21:11.266 05:20:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:11.525 05:20:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:11.525 05:20:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:11.525 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:11.525 05:20:01 -- nvmf/common.sh@161 -- # true 00:21:11.525 05:20:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:11.525 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:11.525 05:20:01 -- nvmf/common.sh@162 -- # true 00:21:11.525 05:20:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:11.525 05:20:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:11.525 05:20:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:11.525 05:20:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:11.525 05:20:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:11.525 05:20:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:11.525 05:20:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:11.525 05:20:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:11.525 05:20:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:11.525 05:20:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:11.525 05:20:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:11.525 05:20:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:11.525 05:20:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:11.525 05:20:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:11.525 05:20:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:11.525 05:20:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:11.525 05:20:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:11.525 05:20:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:11.525 05:20:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:11.525 05:20:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:11.525 05:20:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:11.525 05:20:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:11.525 05:20:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:11.525 05:20:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:11.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:11.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:21:11.525 00:21:11.525 --- 10.0.0.2 ping statistics --- 00:21:11.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.525 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:21:11.525 05:20:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:11.525 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:11.525 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:21:11.525 00:21:11.525 --- 10.0.0.3 ping statistics --- 00:21:11.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.525 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:21:11.525 05:20:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:11.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:11.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:21:11.525 00:21:11.525 --- 10.0.0.1 ping statistics --- 00:21:11.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.525 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:21:11.525 05:20:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.525 05:20:01 -- nvmf/common.sh@421 -- # return 0 00:21:11.525 05:20:01 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:21:11.525 05:20:01 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:12.095 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:12.353 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:21:12.353 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:21:12.353 05:20:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.353 05:20:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:12.353 05:20:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:12.353 05:20:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.353 05:20:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:12.353 05:20:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:12.353 05:20:02 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:21:12.353 05:20:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:12.353 05:20:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:12.353 05:20:02 -- common/autotest_common.sh@10 -- # set +x 00:21:12.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.353 05:20:02 -- nvmf/common.sh@469 -- # nvmfpid=87995 00:21:12.353 05:20:02 -- nvmf/common.sh@470 -- # waitforlisten 87995 00:21:12.353 05:20:02 -- common/autotest_common.sh@829 -- # '[' -z 87995 ']' 00:21:12.353 05:20:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:12.353 05:20:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.353 05:20:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:12.353 05:20:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.353 05:20:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:12.353 05:20:02 -- common/autotest_common.sh@10 -- # set +x 00:21:12.612 [2024-12-08 05:20:02.172897] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
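nvmf_veth_init, traced above, builds the entire test network in software before the target is started. Condensed to its essential steps (interface and namespace names are the ones nvmf/common.sh uses; the individual link-up commands are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, root namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, moved into the namespace
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bridge the *_br peers together so 10.0.0.1 can reach .2 and .3
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above confirm that the initiator address (10.0.0.1) and both target addresses are reachable before nvmf_tgt is launched inside the namespace.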
00:21:12.612 [2024-12-08 05:20:02.172994] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.612 [2024-12-08 05:20:02.314574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:12.612 [2024-12-08 05:20:02.355344] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:12.612 [2024-12-08 05:20:02.355721] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.612 [2024-12-08 05:20:02.355924] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.612 [2024-12-08 05:20:02.356172] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:12.612 [2024-12-08 05:20:02.356572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.612 [2024-12-08 05:20:02.356699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.613 [2024-12-08 05:20:02.356771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:12.613 [2024-12-08 05:20:02.356776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.546 05:20:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:13.546 05:20:03 -- common/autotest_common.sh@862 -- # return 0 00:21:13.546 05:20:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:13.546 05:20:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:13.546 05:20:03 -- common/autotest_common.sh@10 -- # set +x 00:21:13.546 05:20:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.546 05:20:03 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:13.546 05:20:03 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:21:13.546 05:20:03 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:21:13.546 05:20:03 -- scripts/common.sh@311 -- # local bdf bdfs 00:21:13.546 05:20:03 -- scripts/common.sh@312 -- # local nvmes 00:21:13.546 05:20:03 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:21:13.546 05:20:03 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:13.546 05:20:03 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:21:13.546 05:20:03 -- scripts/common.sh@297 -- # local bdf= 00:21:13.546 05:20:03 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:21:13.546 05:20:03 -- scripts/common.sh@232 -- # local class 00:21:13.546 05:20:03 -- scripts/common.sh@233 -- # local subclass 00:21:13.546 05:20:03 -- scripts/common.sh@234 -- # local progif 00:21:13.546 05:20:03 -- scripts/common.sh@235 -- # printf %02x 1 00:21:13.546 05:20:03 -- scripts/common.sh@235 -- # class=01 00:21:13.546 05:20:03 -- scripts/common.sh@236 -- # printf %02x 8 00:21:13.546 05:20:03 -- scripts/common.sh@236 -- # subclass=08 00:21:13.546 05:20:03 -- scripts/common.sh@237 -- # printf %02x 2 00:21:13.546 05:20:03 -- scripts/common.sh@237 -- # progif=02 00:21:13.546 05:20:03 -- scripts/common.sh@239 -- # hash lspci 00:21:13.546 05:20:03 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:21:13.546 05:20:03 -- scripts/common.sh@242 -- # grep -i -- -p02 00:21:13.546 05:20:03 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 
00:21:13.546 05:20:03 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:21:13.546 05:20:03 -- scripts/common.sh@244 -- # tr -d '"' 00:21:13.546 05:20:03 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:13.546 05:20:03 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:21:13.546 05:20:03 -- scripts/common.sh@15 -- # local i 00:21:13.546 05:20:03 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:21:13.547 05:20:03 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:13.547 05:20:03 -- scripts/common.sh@24 -- # return 0 00:21:13.547 05:20:03 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:21:13.547 05:20:03 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:13.547 05:20:03 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:21:13.547 05:20:03 -- scripts/common.sh@15 -- # local i 00:21:13.547 05:20:03 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:21:13.547 05:20:03 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:13.547 05:20:03 -- scripts/common.sh@24 -- # return 0 00:21:13.547 05:20:03 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:21:13.547 05:20:03 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:21:13.547 05:20:03 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:21:13.547 05:20:03 -- scripts/common.sh@322 -- # uname -s 00:21:13.547 05:20:03 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:21:13.547 05:20:03 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:21:13.547 05:20:03 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:21:13.547 05:20:03 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:21:13.547 05:20:03 -- scripts/common.sh@322 -- # uname -s 00:21:13.547 05:20:03 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:21:13.547 05:20:03 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:21:13.547 05:20:03 -- scripts/common.sh@327 -- # (( 2 )) 00:21:13.547 05:20:03 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:21:13.547 05:20:03 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:21:13.547 05:20:03 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:21:13.547 05:20:03 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:21:13.547 05:20:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:13.547 05:20:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:13.547 05:20:03 -- common/autotest_common.sh@10 -- # set +x 00:21:13.547 ************************************ 00:21:13.547 START TEST spdk_target_abort 00:21:13.547 ************************************ 00:21:13.547 05:20:03 -- common/autotest_common.sh@1114 -- # spdk_target 00:21:13.547 05:20:03 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:13.547 05:20:03 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:21:13.547 05:20:03 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:21:13.547 05:20:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.547 05:20:03 -- common/autotest_common.sh@10 -- # set +x 00:21:13.805 spdk_targetn1 00:21:13.805 05:20:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.805 05:20:03 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:13.805 05:20:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.805 05:20:03 -- common/autotest_common.sh@10 -- # set +x 00:21:13.805 [2024-12-08 
05:20:03.351773] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.805 05:20:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.805 05:20:03 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:21:13.805 05:20:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.805 05:20:03 -- common/autotest_common.sh@10 -- # set +x 00:21:13.805 05:20:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.805 05:20:03 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:21:13.805 05:20:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.805 05:20:03 -- common/autotest_common.sh@10 -- # set +x 00:21:13.805 05:20:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.805 05:20:03 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:21:13.805 05:20:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.805 05:20:03 -- common/autotest_common.sh@10 -- # set +x 00:21:13.805 [2024-12-08 05:20:03.379979] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.805 05:20:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.805 05:20:03 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:21:13.805 05:20:03 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:13.805 05:20:03 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:13.805 05:20:03 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:21:13.805 05:20:03 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:13.805 05:20:03 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:21:13.805 05:20:03 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:13.805 05:20:03 -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:13.805 05:20:03 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:13.805 05:20:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:13.805 05:20:03 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:13.806 05:20:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:13.806 05:20:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:13.806 05:20:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:13.806 05:20:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:21:13.806 05:20:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:13.806 05:20:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:13.806 05:20:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:13.806 05:20:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:21:13.806 05:20:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:13.806 05:20:03 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:21:17.098 Initializing NVMe Controllers 00:21:17.098 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:21:17.098 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:21:17.098 Initialization complete. Launching workers. 00:21:17.098 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 11010, failed: 0 00:21:17.098 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1018, failed to submit 9992 00:21:17.098 success 741, unsuccess 277, failed 0 00:21:17.098 05:20:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:17.098 05:20:06 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:21:20.416 Initializing NVMe Controllers 00:21:20.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:21:20.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:21:20.416 Initialization complete. Launching workers. 00:21:20.416 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8904, failed: 0 00:21:20.416 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1161, failed to submit 7743 00:21:20.416 success 374, unsuccess 787, failed 0 00:21:20.416 05:20:09 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:20.416 05:20:09 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:21:23.696 Initializing NVMe Controllers 00:21:23.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:21:23.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:21:23.696 Initialization complete. Launching workers. 
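The three abort runs above differ only in queue depth (4, 24, 64); the results of the 64-deep run follow below. For reference, a single run can be launched directly, with the transport ID exactly as assembled in the trace:

  /home/vagrant/spdk_repo/spdk/build/examples/abort \
      -q 64 -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
  # -q sets the queue depth, -w rw -M 50 gives a 50/50 read/write mix, -o is the 4 KiB I/O size,
  # -r is the SPDK transport ID of the subsystem created above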
00:21:23.696 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 30719, failed: 0 00:21:23.696 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2182, failed to submit 28537 00:21:23.696 success 434, unsuccess 1748, failed 0 00:21:23.696 05:20:13 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:21:23.696 05:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.696 05:20:13 -- common/autotest_common.sh@10 -- # set +x 00:21:23.696 05:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.696 05:20:13 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:23.696 05:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.696 05:20:13 -- common/autotest_common.sh@10 -- # set +x 00:21:23.696 05:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.696 05:20:13 -- target/abort_qd_sizes.sh@62 -- # killprocess 87995 00:21:23.696 05:20:13 -- common/autotest_common.sh@936 -- # '[' -z 87995 ']' 00:21:23.696 05:20:13 -- common/autotest_common.sh@940 -- # kill -0 87995 00:21:23.696 05:20:13 -- common/autotest_common.sh@941 -- # uname 00:21:23.696 05:20:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:23.696 05:20:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87995 00:21:23.696 killing process with pid 87995 00:21:23.696 05:20:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:23.696 05:20:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:23.696 05:20:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87995' 00:21:23.696 05:20:13 -- common/autotest_common.sh@955 -- # kill 87995 00:21:23.696 05:20:13 -- common/autotest_common.sh@960 -- # wait 87995 00:21:23.954 ************************************ 00:21:23.954 END TEST spdk_target_abort 00:21:23.954 ************************************ 00:21:23.954 00:21:23.954 real 0m10.353s 00:21:23.954 user 0m42.511s 00:21:23.954 sys 0m2.048s 00:21:23.954 05:20:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:23.954 05:20:13 -- common/autotest_common.sh@10 -- # set +x 00:21:23.954 05:20:13 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:21:23.954 05:20:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:23.954 05:20:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:23.954 05:20:13 -- common/autotest_common.sh@10 -- # set +x 00:21:23.954 ************************************ 00:21:23.954 START TEST kernel_target_abort 00:21:23.954 ************************************ 00:21:23.954 05:20:13 -- common/autotest_common.sh@1114 -- # kernel_target 00:21:23.954 05:20:13 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:21:23.954 05:20:13 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:21:23.954 05:20:13 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:21:23.954 05:20:13 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:21:23.954 05:20:13 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:21:23.954 05:20:13 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:21:23.954 05:20:13 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:23.954 05:20:13 -- nvmf/common.sh@627 -- # local block nvme 00:21:23.954 05:20:13 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:21:23.954 05:20:13 -- nvmf/common.sh@630 -- # modprobe nvmet 00:21:23.954 05:20:13 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:23.954 05:20:13 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:24.521 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:24.521 Waiting for block devices as requested 00:21:24.521 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:21:24.521 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:21:24.521 05:20:14 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:21:24.521 05:20:14 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:24.521 05:20:14 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:21:24.521 05:20:14 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:21:24.521 05:20:14 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:24.521 No valid GPT data, bailing 00:21:24.521 05:20:14 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:24.521 05:20:14 -- scripts/common.sh@393 -- # pt= 00:21:24.521 05:20:14 -- scripts/common.sh@394 -- # return 1 00:21:24.521 05:20:14 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:21:24.521 05:20:14 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:21:24.521 05:20:14 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:24.521 05:20:14 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:21:24.521 05:20:14 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:21:24.521 05:20:14 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:24.779 No valid GPT data, bailing 00:21:24.779 05:20:14 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:24.779 05:20:14 -- scripts/common.sh@393 -- # pt= 00:21:24.779 05:20:14 -- scripts/common.sh@394 -- # return 1 00:21:24.779 05:20:14 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:21:24.779 05:20:14 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:21:24.779 05:20:14 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:21:24.779 05:20:14 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:21:24.779 05:20:14 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:21:24.779 05:20:14 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:21:24.779 No valid GPT data, bailing 00:21:24.779 05:20:14 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:21:24.779 05:20:14 -- scripts/common.sh@393 -- # pt= 00:21:24.779 05:20:14 -- scripts/common.sh@394 -- # return 1 00:21:24.780 05:20:14 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:21:24.780 05:20:14 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:21:24.780 05:20:14 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:21:24.780 05:20:14 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:21:24.780 05:20:14 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:21:24.780 05:20:14 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:21:24.780 No valid GPT data, bailing 00:21:24.780 05:20:14 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:21:24.780 05:20:14 -- scripts/common.sh@393 -- # pt= 00:21:24.780 05:20:14 -- scripts/common.sh@394 -- # return 1 00:21:24.780 05:20:14 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:21:24.780 05:20:14 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:21:24.780 05:20:14 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:21:24.780 05:20:14 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:21:24.780 05:20:14 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:24.780 05:20:14 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:21:24.780 05:20:14 -- nvmf/common.sh@654 -- # echo 1 00:21:24.780 05:20:14 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:21:24.780 05:20:14 -- nvmf/common.sh@656 -- # echo 1 00:21:24.780 05:20:14 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:21:24.780 05:20:14 -- nvmf/common.sh@663 -- # echo tcp 00:21:24.780 05:20:14 -- nvmf/common.sh@664 -- # echo 4420 00:21:24.780 05:20:14 -- nvmf/common.sh@665 -- # echo ipv4 00:21:24.780 05:20:14 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:25.038 05:20:14 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 --hostid=bfe11ee8-aac0-4eb2-9e49-c15a5b73de32 -a 10.0.0.1 -t tcp -s 4420 00:21:25.038 00:21:25.038 Discovery Log Number of Records 2, Generation counter 2 00:21:25.038 =====Discovery Log Entry 0====== 00:21:25.038 trtype: tcp 00:21:25.038 adrfam: ipv4 00:21:25.038 subtype: current discovery subsystem 00:21:25.038 treq: not specified, sq flow control disable supported 00:21:25.038 portid: 1 00:21:25.038 trsvcid: 4420 00:21:25.038 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:25.038 traddr: 10.0.0.1 00:21:25.038 eflags: none 00:21:25.038 sectype: none 00:21:25.038 =====Discovery Log Entry 1====== 00:21:25.038 trtype: tcp 00:21:25.038 adrfam: ipv4 00:21:25.038 subtype: nvme subsystem 00:21:25.038 treq: not specified, sq flow control disable supported 00:21:25.038 portid: 1 00:21:25.038 trsvcid: 4420 00:21:25.038 subnqn: kernel_target 00:21:25.038 traddr: 10.0.0.1 00:21:25.038 eflags: none 00:21:25.038 sectype: none 00:21:25.038 05:20:14 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:21:25.038 05:20:14 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:25.038 05:20:14 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:25.038 05:20:14 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:25.038 05:20:14 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:25.038 05:20:14 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:21:25.038 05:20:14 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:25.038 05:20:14 -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:25.038 05:20:14 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:25.038 05:20:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:25.038 05:20:14 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:25.038 05:20:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:25.038 05:20:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:25.038 05:20:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:25.038 05:20:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:25.038 05:20:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:25.038 05:20:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
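configure_kernel_target, traced above, wires the chosen block device (/dev/nvme1n3 here) into the Linux nvmet target through configfs. The xtrace shows the echo commands but not their redirection targets, so the attribute paths in this sketch are filled in from the standard nvmet configfs layout rather than from the trace itself (the attr_model name in particular is an assumption):

  modprobe nvmet
  cd /sys/kernel/config/nvmet
  mkdir -p subsystems/kernel_target/namespaces/1 ports/1
  echo SPDK-kernel_target > subsystems/kernel_target/attr_model         # model string; attribute name assumed
  echo 1 > subsystems/kernel_target/attr_allow_any_host
  echo /dev/nvme1n3 > subsystems/kernel_target/namespaces/1/device_path
  echo 1 > subsystems/kernel_target/namespaces/1/enable
  echo 10.0.0.1 > ports/1/addr_traddr
  echo tcp > ports/1/addr_trtype
  echo 4420 > ports/1/addr_trsvcid
  echo ipv4 > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/kernel_target ports/1/subsystems/

The nvme discover output above, listing the discovery subsystem and kernel_target on 10.0.0.1:4420, confirms the port is live before the abort workload starts.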
00:21:25.038 05:20:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:25.038 05:20:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:21:25.038 05:20:14 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:25.038 05:20:14 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:21:28.349 Initializing NVMe Controllers 00:21:28.349 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:21:28.349 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:21:28.349 Initialization complete. Launching workers. 00:21:28.349 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 33952, failed: 0 00:21:28.349 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 33952, failed to submit 0 00:21:28.349 success 0, unsuccess 33952, failed 0 00:21:28.349 05:20:17 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:28.349 05:20:17 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:21:31.627 Initializing NVMe Controllers 00:21:31.627 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:21:31.627 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:21:31.627 Initialization complete. Launching workers. 00:21:31.627 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 68867, failed: 0 00:21:31.627 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 30253, failed to submit 38614 00:21:31.627 success 0, unsuccess 30253, failed 0 00:21:31.627 05:20:20 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:31.627 05:20:20 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:21:34.906 Initializing NVMe Controllers 00:21:34.906 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:21:34.906 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:21:34.906 Initialization complete. Launching workers. 
00:21:34.906 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 82630, failed: 0 00:21:34.906 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 20634, failed to submit 61996 00:21:34.906 success 0, unsuccess 20634, failed 0 00:21:34.906 05:20:24 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:21:34.907 05:20:24 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:21:34.907 05:20:24 -- nvmf/common.sh@677 -- # echo 0 00:21:34.907 05:20:24 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:21:34.907 05:20:24 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:21:34.907 05:20:24 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:34.907 05:20:24 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:21:34.907 05:20:24 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:21:34.907 05:20:24 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:21:34.907 00:21:34.907 real 0m10.508s 00:21:34.907 user 0m6.001s 00:21:34.907 sys 0m1.989s 00:21:34.907 05:20:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:34.907 05:20:24 -- common/autotest_common.sh@10 -- # set +x 00:21:34.907 ************************************ 00:21:34.907 END TEST kernel_target_abort 00:21:34.907 ************************************ 00:21:34.907 05:20:24 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:21:34.907 05:20:24 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:21:34.907 05:20:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:34.907 05:20:24 -- nvmf/common.sh@116 -- # sync 00:21:34.907 05:20:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:34.907 05:20:24 -- nvmf/common.sh@119 -- # set +e 00:21:34.907 05:20:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:34.907 05:20:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:34.907 rmmod nvme_tcp 00:21:34.907 rmmod nvme_fabrics 00:21:34.907 rmmod nvme_keyring 00:21:34.907 05:20:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:34.907 05:20:24 -- nvmf/common.sh@123 -- # set -e 00:21:34.907 05:20:24 -- nvmf/common.sh@124 -- # return 0 00:21:34.907 05:20:24 -- nvmf/common.sh@477 -- # '[' -n 87995 ']' 00:21:34.907 05:20:24 -- nvmf/common.sh@478 -- # killprocess 87995 00:21:34.907 05:20:24 -- common/autotest_common.sh@936 -- # '[' -z 87995 ']' 00:21:34.907 05:20:24 -- common/autotest_common.sh@940 -- # kill -0 87995 00:21:34.907 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (87995) - No such process 00:21:34.907 Process with pid 87995 is not found 00:21:34.907 05:20:24 -- common/autotest_common.sh@963 -- # echo 'Process with pid 87995 is not found' 00:21:34.907 05:20:24 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:21:34.907 05:20:24 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:35.165 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:35.428 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:21:35.428 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:21:35.428 05:20:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:35.428 05:20:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:35.428 05:20:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:35.428 05:20:25 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:21:35.428 05:20:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.428 05:20:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:35.428 05:20:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.428 05:20:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:35.428 00:21:35.428 real 0m24.286s 00:21:35.428 user 0m49.949s 00:21:35.428 sys 0m5.262s 00:21:35.428 05:20:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:35.428 05:20:25 -- common/autotest_common.sh@10 -- # set +x 00:21:35.428 ************************************ 00:21:35.428 END TEST nvmf_abort_qd_sizes 00:21:35.428 ************************************ 00:21:35.428 05:20:25 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:21:35.428 05:20:25 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:21:35.428 05:20:25 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:21:35.428 05:20:25 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:35.428 05:20:25 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:21:35.428 05:20:25 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:21:35.428 05:20:25 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:21:35.428 05:20:25 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:35.428 05:20:25 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:21:35.428 05:20:25 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:35.428 05:20:25 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:35.428 05:20:25 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:21:35.428 05:20:25 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:21:35.428 05:20:25 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:21:35.428 05:20:25 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:21:35.428 05:20:25 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:21:35.428 05:20:25 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:21:35.428 05:20:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:35.428 05:20:25 -- common/autotest_common.sh@10 -- # set +x 00:21:35.428 05:20:25 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:21:35.428 05:20:25 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:21:35.428 05:20:25 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:21:35.428 05:20:25 -- common/autotest_common.sh@10 -- # set +x 00:21:37.327 INFO: APP EXITING 00:21:37.327 INFO: killing all VMs 00:21:37.327 INFO: killing vhost app 00:21:37.327 INFO: EXIT DONE 00:21:37.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:37.585 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:21:37.585 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:21:38.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:38.412 Cleaning 00:21:38.412 Removing: /var/run/dpdk/spdk0/config 00:21:38.412 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:38.412 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:38.412 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:38.412 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:38.412 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:38.412 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:38.412 Removing: /var/run/dpdk/spdk1/config 00:21:38.412 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:38.412 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:38.412 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:21:38.412 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:38.412 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:38.412 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:38.412 Removing: /var/run/dpdk/spdk2/config 00:21:38.412 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:38.412 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:38.412 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:38.412 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:38.412 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:38.412 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:38.412 Removing: /var/run/dpdk/spdk3/config 00:21:38.412 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:38.412 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:38.412 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:38.412 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:38.412 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:38.412 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:38.412 Removing: /var/run/dpdk/spdk4/config 00:21:38.412 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:38.412 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:38.412 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:38.412 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:38.412 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:38.412 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:38.412 Removing: /dev/shm/nvmf_trace.0 00:21:38.412 Removing: /dev/shm/spdk_tgt_trace.pid65888 00:21:38.412 Removing: /var/run/dpdk/spdk0 00:21:38.412 Removing: /var/run/dpdk/spdk1 00:21:38.412 Removing: /var/run/dpdk/spdk2 00:21:38.412 Removing: /var/run/dpdk/spdk3 00:21:38.412 Removing: /var/run/dpdk/spdk4 00:21:38.412 Removing: /var/run/dpdk/spdk_pid65742 00:21:38.412 Removing: /var/run/dpdk/spdk_pid65888 00:21:38.412 Removing: /var/run/dpdk/spdk_pid66141 00:21:38.412 Removing: /var/run/dpdk/spdk_pid66332 00:21:38.412 Removing: /var/run/dpdk/spdk_pid66479 00:21:38.412 Removing: /var/run/dpdk/spdk_pid66551 00:21:38.412 Removing: /var/run/dpdk/spdk_pid66634 00:21:38.412 Removing: /var/run/dpdk/spdk_pid66732 00:21:38.412 Removing: /var/run/dpdk/spdk_pid66816 00:21:38.412 Removing: /var/run/dpdk/spdk_pid66849 00:21:38.412 Removing: /var/run/dpdk/spdk_pid66879 00:21:38.412 Removing: /var/run/dpdk/spdk_pid66948 00:21:38.412 Removing: /var/run/dpdk/spdk_pid67047 00:21:38.412 Removing: /var/run/dpdk/spdk_pid67478 00:21:38.412 Removing: /var/run/dpdk/spdk_pid67526 00:21:38.412 Removing: /var/run/dpdk/spdk_pid67571 00:21:38.412 Removing: /var/run/dpdk/spdk_pid67587 00:21:38.412 Removing: /var/run/dpdk/spdk_pid67649 00:21:38.412 Removing: /var/run/dpdk/spdk_pid67665 00:21:38.412 Removing: /var/run/dpdk/spdk_pid67725 00:21:38.412 Removing: /var/run/dpdk/spdk_pid67741 00:21:38.412 Removing: /var/run/dpdk/spdk_pid67788 00:21:38.412 Removing: /var/run/dpdk/spdk_pid67806 00:21:38.412 Removing: /var/run/dpdk/spdk_pid67846 00:21:38.412 Removing: /var/run/dpdk/spdk_pid67864 00:21:38.412 Removing: /var/run/dpdk/spdk_pid67990 00:21:38.412 Removing: /var/run/dpdk/spdk_pid68032 00:21:38.412 Removing: /var/run/dpdk/spdk_pid68108 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68160 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68184 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68237 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68262 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68291 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68305 
00:21:38.670 Removing: /var/run/dpdk/spdk_pid68345 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68359 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68388 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68410 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68444 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68458 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68493 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68512 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68541 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68561 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68595 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68609 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68644 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68658 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68692 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68712 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68741 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68760 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68795 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68809 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68843 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68863 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68892 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68906 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68946 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68960 00:21:38.670 Removing: /var/run/dpdk/spdk_pid68992 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69017 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69046 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69063 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69106 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69123 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69161 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69175 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69209 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69229 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69259 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69336 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69423 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69755 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69767 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69803 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69816 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69829 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69847 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69860 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69873 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69896 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69904 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69923 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69941 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69948 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69967 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69985 00:21:38.670 Removing: /var/run/dpdk/spdk_pid69992 00:21:38.670 Removing: /var/run/dpdk/spdk_pid70010 00:21:38.670 Removing: /var/run/dpdk/spdk_pid70024 00:21:38.670 Removing: /var/run/dpdk/spdk_pid70036 00:21:38.670 Removing: /var/run/dpdk/spdk_pid70044 00:21:38.670 Removing: /var/run/dpdk/spdk_pid70079 00:21:38.670 Removing: /var/run/dpdk/spdk_pid70092 00:21:38.670 Removing: /var/run/dpdk/spdk_pid70119 00:21:38.670 Removing: /var/run/dpdk/spdk_pid70189 00:21:38.670 Removing: /var/run/dpdk/spdk_pid70210 00:21:38.670 Removing: /var/run/dpdk/spdk_pid70220 00:21:38.670 Removing: /var/run/dpdk/spdk_pid70248 00:21:38.670 Removing: /var/run/dpdk/spdk_pid70252 00:21:38.670 Removing: /var/run/dpdk/spdk_pid70265 00:21:38.670 Removing: /var/run/dpdk/spdk_pid70300 00:21:38.670 Removing: 
/var/run/dpdk/spdk_pid70312 00:21:38.670 Removing: /var/run/dpdk/spdk_pid70338 00:21:38.671 Removing: /var/run/dpdk/spdk_pid70346 00:21:38.671 Removing: /var/run/dpdk/spdk_pid70353 00:21:38.671 Removing: /var/run/dpdk/spdk_pid70355 00:21:38.671 Removing: /var/run/dpdk/spdk_pid70367 00:21:38.671 Removing: /var/run/dpdk/spdk_pid70370 00:21:38.671 Removing: /var/run/dpdk/spdk_pid70378 00:21:38.671 Removing: /var/run/dpdk/spdk_pid70385 00:21:38.671 Removing: /var/run/dpdk/spdk_pid70412 00:21:38.671 Removing: /var/run/dpdk/spdk_pid70438 00:21:38.671 Removing: /var/run/dpdk/spdk_pid70448 00:21:38.671 Removing: /var/run/dpdk/spdk_pid70476 00:21:38.671 Removing: /var/run/dpdk/spdk_pid70480 00:21:38.929 Removing: /var/run/dpdk/spdk_pid70488 00:21:38.929 Removing: /var/run/dpdk/spdk_pid70528 00:21:38.929 Removing: /var/run/dpdk/spdk_pid70540 00:21:38.929 Removing: /var/run/dpdk/spdk_pid70566 00:21:38.929 Removing: /var/run/dpdk/spdk_pid70574 00:21:38.929 Removing: /var/run/dpdk/spdk_pid70581 00:21:38.929 Removing: /var/run/dpdk/spdk_pid70589 00:21:38.929 Removing: /var/run/dpdk/spdk_pid70591 00:21:38.929 Removing: /var/run/dpdk/spdk_pid70604 00:21:38.929 Removing: /var/run/dpdk/spdk_pid70606 00:21:38.929 Removing: /var/run/dpdk/spdk_pid70618 00:21:38.929 Removing: /var/run/dpdk/spdk_pid70689 00:21:38.929 Removing: /var/run/dpdk/spdk_pid70742 00:21:38.929 Removing: /var/run/dpdk/spdk_pid70848 00:21:38.929 Removing: /var/run/dpdk/spdk_pid70887 00:21:38.929 Removing: /var/run/dpdk/spdk_pid70929 00:21:38.929 Removing: /var/run/dpdk/spdk_pid70938 00:21:38.929 Removing: /var/run/dpdk/spdk_pid70958 00:21:38.929 Removing: /var/run/dpdk/spdk_pid70978 00:21:38.929 Removing: /var/run/dpdk/spdk_pid71002 00:21:38.929 Removing: /var/run/dpdk/spdk_pid71022 00:21:38.929 Removing: /var/run/dpdk/spdk_pid71098 00:21:38.929 Removing: /var/run/dpdk/spdk_pid71107 00:21:38.929 Removing: /var/run/dpdk/spdk_pid71155 00:21:38.929 Removing: /var/run/dpdk/spdk_pid71228 00:21:38.929 Removing: /var/run/dpdk/spdk_pid71292 00:21:38.929 Removing: /var/run/dpdk/spdk_pid71320 00:21:38.929 Removing: /var/run/dpdk/spdk_pid71413 00:21:38.929 Removing: /var/run/dpdk/spdk_pid71459 00:21:38.929 Removing: /var/run/dpdk/spdk_pid71485 00:21:38.929 Removing: /var/run/dpdk/spdk_pid71714 00:21:38.929 Removing: /var/run/dpdk/spdk_pid71801 00:21:38.929 Removing: /var/run/dpdk/spdk_pid71828 00:21:38.929 Removing: /var/run/dpdk/spdk_pid72157 00:21:38.929 Removing: /var/run/dpdk/spdk_pid72200 00:21:38.929 Removing: /var/run/dpdk/spdk_pid72517 00:21:38.929 Removing: /var/run/dpdk/spdk_pid72928 00:21:38.929 Removing: /var/run/dpdk/spdk_pid73214 00:21:38.929 Removing: /var/run/dpdk/spdk_pid74020 00:21:38.929 Removing: /var/run/dpdk/spdk_pid74909 00:21:38.929 Removing: /var/run/dpdk/spdk_pid75022 00:21:38.929 Removing: /var/run/dpdk/spdk_pid75090 00:21:38.929 Removing: /var/run/dpdk/spdk_pid76367 00:21:38.929 Removing: /var/run/dpdk/spdk_pid76590 00:21:38.929 Removing: /var/run/dpdk/spdk_pid76927 00:21:38.929 Removing: /var/run/dpdk/spdk_pid77044 00:21:38.929 Removing: /var/run/dpdk/spdk_pid77169 00:21:38.929 Removing: /var/run/dpdk/spdk_pid77180 00:21:38.929 Removing: /var/run/dpdk/spdk_pid77213 00:21:38.929 Removing: /var/run/dpdk/spdk_pid77228 00:21:38.929 Removing: /var/run/dpdk/spdk_pid77329 00:21:38.929 Removing: /var/run/dpdk/spdk_pid77452 00:21:38.929 Removing: /var/run/dpdk/spdk_pid77592 00:21:38.929 Removing: /var/run/dpdk/spdk_pid77664 00:21:38.929 Removing: /var/run/dpdk/spdk_pid78056 00:21:38.929 Removing: /var/run/dpdk/spdk_pid78411 
00:21:38.929 Removing: /var/run/dpdk/spdk_pid78413 00:21:38.929 Removing: /var/run/dpdk/spdk_pid80602 00:21:38.929 Removing: /var/run/dpdk/spdk_pid80604 00:21:38.929 Removing: /var/run/dpdk/spdk_pid80893 00:21:38.929 Removing: /var/run/dpdk/spdk_pid80914 00:21:38.929 Removing: /var/run/dpdk/spdk_pid80928 00:21:38.929 Removing: /var/run/dpdk/spdk_pid80960 00:21:38.929 Removing: /var/run/dpdk/spdk_pid80965 00:21:38.929 Removing: /var/run/dpdk/spdk_pid81059 00:21:38.929 Removing: /var/run/dpdk/spdk_pid81067 00:21:38.929 Removing: /var/run/dpdk/spdk_pid81175 00:21:38.929 Removing: /var/run/dpdk/spdk_pid81181 00:21:38.929 Removing: /var/run/dpdk/spdk_pid81285 00:21:38.929 Removing: /var/run/dpdk/spdk_pid81291 00:21:38.929 Removing: /var/run/dpdk/spdk_pid81704 00:21:38.929 Removing: /var/run/dpdk/spdk_pid81747 00:21:38.929 Removing: /var/run/dpdk/spdk_pid81862 00:21:38.929 Removing: /var/run/dpdk/spdk_pid81946 00:21:38.929 Removing: /var/run/dpdk/spdk_pid82266 00:21:38.929 Removing: /var/run/dpdk/spdk_pid82468 00:21:38.929 Removing: /var/run/dpdk/spdk_pid82852 00:21:38.929 Removing: /var/run/dpdk/spdk_pid83377 00:21:38.929 Removing: /var/run/dpdk/spdk_pid83821 00:21:38.929 Removing: /var/run/dpdk/spdk_pid83868 00:21:38.929 Removing: /var/run/dpdk/spdk_pid83921 00:21:38.929 Removing: /var/run/dpdk/spdk_pid83976 00:21:38.929 Removing: /var/run/dpdk/spdk_pid84076 00:21:38.929 Removing: /var/run/dpdk/spdk_pid84123 00:21:38.929 Removing: /var/run/dpdk/spdk_pid84190 00:21:38.929 Removing: /var/run/dpdk/spdk_pid84237 00:21:38.929 Removing: /var/run/dpdk/spdk_pid84565 00:21:38.929 Removing: /var/run/dpdk/spdk_pid85756 00:21:38.929 Removing: /var/run/dpdk/spdk_pid85908 00:21:38.929 Removing: /var/run/dpdk/spdk_pid86151 00:21:39.193 Removing: /var/run/dpdk/spdk_pid86706 00:21:39.193 Removing: /var/run/dpdk/spdk_pid86860 00:21:39.193 Removing: /var/run/dpdk/spdk_pid87023 00:21:39.193 Removing: /var/run/dpdk/spdk_pid87120 00:21:39.193 Removing: /var/run/dpdk/spdk_pid87282 00:21:39.193 Removing: /var/run/dpdk/spdk_pid87385 00:21:39.193 Removing: /var/run/dpdk/spdk_pid88052 00:21:39.193 Removing: /var/run/dpdk/spdk_pid88087 00:21:39.193 Removing: /var/run/dpdk/spdk_pid88122 00:21:39.193 Removing: /var/run/dpdk/spdk_pid88371 00:21:39.193 Removing: /var/run/dpdk/spdk_pid88402 00:21:39.193 Removing: /var/run/dpdk/spdk_pid88437 00:21:39.193 Clean 00:21:39.193 killing process with pid 60105 00:21:39.193 killing process with pid 60108 00:21:39.193 05:20:28 -- common/autotest_common.sh@1446 -- # return 0 00:21:39.193 05:20:28 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:21:39.193 05:20:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:39.193 05:20:28 -- common/autotest_common.sh@10 -- # set +x 00:21:39.193 05:20:28 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:21:39.193 05:20:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:39.193 05:20:28 -- common/autotest_common.sh@10 -- # set +x 00:21:39.193 05:20:28 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:39.193 05:20:28 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:39.193 05:20:28 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:39.193 05:20:28 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:21:39.193 05:20:28 -- spdk/autotest.sh@383 -- # hostname 00:21:39.193 05:20:28 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:39.461 geninfo: WARNING: invalid characters removed from testname! 00:22:06.010 05:20:54 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:09.290 05:20:58 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:11.862 05:21:01 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:14.390 05:21:03 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:17.668 05:21:06 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:20.196 05:21:09 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:22.725 05:21:11 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:22.725 05:21:12 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:22:22.725 05:21:12 -- common/autotest_common.sh@1690 -- $ lcov --version 00:22:22.725 05:21:12 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:22:22.725 05:21:12 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:22:22.725 05:21:12 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:22:22.725 05:21:12 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:22:22.725 05:21:12 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:22:22.725 05:21:12 -- scripts/common.sh@335 -- $ IFS=.-: 00:22:22.725 05:21:12 -- scripts/common.sh@335 -- $ read -ra ver1 00:22:22.725 05:21:12 -- scripts/common.sh@336 -- $ IFS=.-: 
00:22:22.725 05:21:12 -- scripts/common.sh@336 -- $ read -ra ver2 00:22:22.725 05:21:12 -- scripts/common.sh@337 -- $ local 'op=<' 00:22:22.725 05:21:12 -- scripts/common.sh@339 -- $ ver1_l=2 00:22:22.725 05:21:12 -- scripts/common.sh@340 -- $ ver2_l=1 00:22:22.725 05:21:12 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:22:22.725 05:21:12 -- scripts/common.sh@343 -- $ case "$op" in 00:22:22.725 05:21:12 -- scripts/common.sh@344 -- $ : 1 00:22:22.725 05:21:12 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:22:22.725 05:21:12 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:22.725 05:21:12 -- scripts/common.sh@364 -- $ decimal 1 00:22:22.725 05:21:12 -- scripts/common.sh@352 -- $ local d=1 00:22:22.725 05:21:12 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:22:22.725 05:21:12 -- scripts/common.sh@354 -- $ echo 1 00:22:22.725 05:21:12 -- scripts/common.sh@364 -- $ ver1[v]=1 00:22:22.725 05:21:12 -- scripts/common.sh@365 -- $ decimal 2 00:22:22.725 05:21:12 -- scripts/common.sh@352 -- $ local d=2 00:22:22.725 05:21:12 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:22:22.725 05:21:12 -- scripts/common.sh@354 -- $ echo 2 00:22:22.725 05:21:12 -- scripts/common.sh@365 -- $ ver2[v]=2 00:22:22.725 05:21:12 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:22:22.725 05:21:12 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:22:22.725 05:21:12 -- scripts/common.sh@367 -- $ return 0 00:22:22.725 05:21:12 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:22.725 05:21:12 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:22:22.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.725 --rc genhtml_branch_coverage=1 00:22:22.725 --rc genhtml_function_coverage=1 00:22:22.725 --rc genhtml_legend=1 00:22:22.725 --rc geninfo_all_blocks=1 00:22:22.725 --rc geninfo_unexecuted_blocks=1 00:22:22.725 00:22:22.725 ' 00:22:22.725 05:21:12 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:22:22.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.725 --rc genhtml_branch_coverage=1 00:22:22.725 --rc genhtml_function_coverage=1 00:22:22.725 --rc genhtml_legend=1 00:22:22.725 --rc geninfo_all_blocks=1 00:22:22.725 --rc geninfo_unexecuted_blocks=1 00:22:22.725 00:22:22.725 ' 00:22:22.725 05:21:12 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:22:22.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.725 --rc genhtml_branch_coverage=1 00:22:22.725 --rc genhtml_function_coverage=1 00:22:22.725 --rc genhtml_legend=1 00:22:22.725 --rc geninfo_all_blocks=1 00:22:22.725 --rc geninfo_unexecuted_blocks=1 00:22:22.725 00:22:22.725 ' 00:22:22.725 05:21:12 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:22:22.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.725 --rc genhtml_branch_coverage=1 00:22:22.725 --rc genhtml_function_coverage=1 00:22:22.725 --rc genhtml_legend=1 00:22:22.725 --rc geninfo_all_blocks=1 00:22:22.725 --rc geninfo_unexecuted_blocks=1 00:22:22.725 00:22:22.725 ' 00:22:22.725 05:21:12 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:22.725 05:21:12 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:22.725 05:21:12 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.725 05:21:12 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.725 05:21:12 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.725 05:21:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.725 05:21:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.725 05:21:12 -- paths/export.sh@5 -- $ export PATH 00:22:22.725 05:21:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.725 05:21:12 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:22.725 05:21:12 -- common/autobuild_common.sh@440 -- $ date +%s 00:22:22.725 05:21:12 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733635272.XXXXXX 00:22:22.725 05:21:12 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733635272.rA7pI3 00:22:22.725 05:21:12 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:22:22.725 05:21:12 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:22:22.725 05:21:12 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:22:22.725 05:21:12 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:22:22.725 05:21:12 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:22.725 05:21:12 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:22.725 05:21:12 -- common/autobuild_common.sh@456 -- $ get_config_params 00:22:22.725 05:21:12 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:22:22.725 05:21:12 -- common/autotest_common.sh@10 -- $ set +x 00:22:22.726 05:21:12 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:22:22.726 05:21:12 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:22:22.726 05:21:12 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 
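The config_params string captured just above records the configure flags this build was produced with. As a hedged illustration only, the flags below are copied verbatim from that string, while the manual ./configure invocation is an assumption about how SPDK normally consumes get_config_params output; no such command is executed in this run:

# Sketch: configure call implied by the logged config_params value.
cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-uring \
    --with-dpdk=/home/vagrant/spdk_repo/dpdk/build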
00:22:22.726 05:21:12 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:22:22.726 05:21:12 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:22:22.726 05:21:12 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:22:22.726 05:21:12 -- spdk/autopackage.sh@19 -- $ timing_finish 00:22:22.726 05:21:12 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:22.726 05:21:12 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:22:22.726 05:21:12 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:22.726 05:21:12 -- spdk/autopackage.sh@20 -- $ exit 0 00:22:22.726 + [[ -n 5969 ]] 00:22:22.726 + sudo kill 5969 00:22:22.734 [Pipeline] } 00:22:22.751 [Pipeline] // timeout 00:22:22.757 [Pipeline] } 00:22:22.772 [Pipeline] // stage 00:22:22.777 [Pipeline] } 00:22:22.791 [Pipeline] // catchError 00:22:22.800 [Pipeline] stage 00:22:22.802 [Pipeline] { (Stop VM) 00:22:22.814 [Pipeline] sh 00:22:23.092 + vagrant halt 00:22:27.275 ==> default: Halting domain... 00:22:32.547 [Pipeline] sh 00:22:32.826 + vagrant destroy -f 00:22:37.009 ==> default: Removing domain... 00:22:37.019 [Pipeline] sh 00:22:37.315 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:22:37.322 [Pipeline] } 00:22:37.334 [Pipeline] // stage 00:22:37.338 [Pipeline] } 00:22:37.350 [Pipeline] // dir 00:22:37.354 [Pipeline] } 00:22:37.367 [Pipeline] // wrap 00:22:37.372 [Pipeline] } 00:22:37.381 [Pipeline] // catchError 00:22:37.388 [Pipeline] stage 00:22:37.390 [Pipeline] { (Epilogue) 00:22:37.401 [Pipeline] sh 00:22:37.679 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:44.255 [Pipeline] catchError 00:22:44.257 [Pipeline] { 00:22:44.269 [Pipeline] sh 00:22:44.547 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:44.805 Artifacts sizes are good 00:22:44.813 [Pipeline] } 00:22:44.829 [Pipeline] // catchError 00:22:44.842 [Pipeline] archiveArtifacts 00:22:44.850 Archiving artifacts 00:22:45.006 [Pipeline] cleanWs 00:22:45.017 [WS-CLEANUP] Deleting project workspace... 00:22:45.017 [WS-CLEANUP] Deferred wipeout is used... 00:22:45.023 [WS-CLEANUP] done 00:22:45.024 [Pipeline] } 00:22:45.039 [Pipeline] // stage 00:22:45.044 [Pipeline] } 00:22:45.057 [Pipeline] // node 00:22:45.062 [Pipeline] End of Pipeline 00:22:45.120 Finished: SUCCESS
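For reference, the pipeline epilogue above reduces to a short teardown and artifact hand-off sequence; the commands below are copied from the log, and presenting them as one flat script (rather than the Jenkins pipeline stages that actually run them) is an assumption made purely for readability:

# Sketch of the closing steps: stop and remove the test VM, move the output
# directory into the Jenkins workspace, then run the jbp helper scripts that
# compress the artifacts and check their size before archiving.
vagrant halt
vagrant destroy -f
mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh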